Friday, June 26, 2020

AWS Honeycode

Amazon Honeycode is a new fully managed AWS service that gives you the power to build mobile and web applications without writing any code.

Amazon Honeycode is available in beta. 

Honeycode comes with built-in templates for common applications that users can pick from to start building their own apps. These templates come in handy for building surveys, task trackers, inventory management systems, or event management processes.

It uses the familiar spreadsheet model and lets you get started in minutes. If you or your teammates are already familiar with spreadsheets and formulas, you’ll be happy to hear that just about everything you know about sheets, tables, values, and formulas still applies.

The service uses an underlying AWS database, allowing data to be linked, filtered and sorted, but with a point-and-click interface and a data entry structure reminiscent of a spreadsheet.

The service is free for applications with up to 20 users, with charges kicking in for larger applications.



Docker on 32-bit Windows

Docker is a set of platform-as-a-service (PaaS) products that use OS-level virtualization to deliver software in packages called containers.
Containers are isolated from one another and bundle their own software, libraries and configuration files; they can communicate with each other through well-defined channels.
Docker allows applications to use the same Linux kernel as the system they're running on, and only requires that applications be shipped with things not already running on the host computer.
Docker Desktop does not support 32-bit Windows, but you can still get a working Docker environment by running the Docker engine inside a VirtualBox VM managed by docker-machine.
Below are the steps.
  • Install Chocolatey by following the instructions at https://chocolatey.org/docs/installation
  • Once installed, run the following commands in PowerShell:
choco install docker-machine -y
docker-machine create --driver virtualbox default
docker-machine env | Invoke-Expression
That’s it! You can verify that it’s working correctly by running “docker version”:
PS C:\Windows\system32> docker version
Client:
 Version: 1.12.1
 API version: 1.24
 Go version: go1.6.3
 Git commit: 23cf638
 Built: Thu Aug 18 17:52:38 2016
 OS/Arch: windows/386
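Assuming the default machine came up cleanly, you can also sanity-check the setup by listing the machine and running a test container:
docker-machine ls
docker run hello-world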

Wednesday, June 24, 2020

Ansible Tower

Ansible Tower (https://www.ansible.com/products/tower) by Red Hat is an enterprise framework for managing your Ansible automation that provides a web UI and a RESTful API.
It provides role-based access control, graphical inventory management, and job scheduling.
It provides real-time output of playbook runs and is fully compatible with the major cloud environments.
Ansible Tower features:
  • Role-based access control: you can set up teams and users in various roles. These can integrate with your existing LDAP or AD environment.
  • Job scheduling: schedule your jobs and set repetition options
  • Portal mode: this is a simplified view of automation jobs for newbies and less experienced Ansible users. This is an excellent feature as it truly lowers the entry barriers to starting to use Ansible.
  • Fully documented REST API: allows you to integrate Ansible into your existing toolset and environment (see the curl sketch after the requirements list below)
  • Tower Dashboard: use this to quickly view a summary of your entire environment. Simplifies things for sysadmins while sipping their coffee.
  • Cloud integration: Tower is compatible with the major cloud environments: Amazon EC2, Rackspace, Azure.
Ansible Tower is supported on the following operating systems:
  • Red Hat Enterprise Linux 6 64-bit
  • Red Hat Enterprise Linux 7 64-bit
  • CentOS 6 64-bit
  • CentOS 7 64-bit
  • Ubuntu 12.04 LTS 64-bit
  • Ubuntu 14.04 LTS 64-bit
  • Ubuntu 16.04 LTS 64-bit
Hardware requirements:
  • 64-bit support required (kernel and runtime) and 20 GB of hard disk space
  • Minimum 2 GB RAM (4+ GB RAM recommended)
  • 2 GB RAM is the minimum (and recommended) for Vagrant trial installations
  • 4 GB RAM is recommended per 100 forks
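Since the REST API is a first-class feature, here is a hedged sketch of launching a job template through it with curl (the host, credentials, and job template ID 5 are illustrative):
curl -k -u admin:password -X POST https://tower.example.com/api/v2/job_templates/5/launch/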


Tuesday, June 23, 2020

Top GUIs for Docker

Graphical user interfaces give you a bunch of different and useful ways to interact with Docker.
Portainer:
Portainer (formerly UI for Docker) is a free open-source web application that runs as a container itself. You can install and start it with:
docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer
Rancher:
Rancher is a tool that takes what’s possible with a Docker GUI further and is more useful for production clusters. 
Reflecting this focus on production, Rancher is designed to work on Linux machines, so to test locally you might need to install onto a Virtual Machine.
Start the Rancher container using the following docker run command:
sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:stable

DockStation (Desktop app):
OS Support: Linux/Mac/Windows
DockStation is a free full-featured desktop app which allows you to work with docker and docker-compose. It can help generate a clean and native docker-compose.yml file which can be used even outside the application, using the native Docker Compose CLI commands.

With DockStation, you can easily track CPU, memory, network I/O, and block I/O usage, as well as open ports. 



Lazydocker (Terminal UI):
Open source. OS support: Linux/macOS/Windows.
Requirements:
  • Go version >= 1.8
  • Docker >= 1.13 (API >= 1.25)
  • Docker Compose >= 1.23.2 (optional)
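If you would rather not install the binary, the lazydocker README also documents a Docker image; a sketch (verify the image name before relying on it):
docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock lazyteam/lazydocker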
Shipyard:
With a similar set of features to the other web-based GUIs mentioned in this article, Shipyard provides an interface to your containers, images, and Docker instances. Start it running with this one command that handles multiple operating systems:
curl -s https://shipyard-project.com/deploy | bash

 

Conclusion:
The above is not a full list, but it covers some of the most popular and convenient free GUIs for Docker. Which one to select depends on your needs.
If you need a really powerful tool for a team, with access management, support for Docker Swarm and Docker stacks, and the ability to be deployed on a remote server, choose Portainer. 

Monday, June 22, 2020

Netflix Eureka Server

In a typical microservice architecture we have many small applications deployed separately and they often need to communicate with each other.

The problem in this type of architecture is how a client service finds its end services. We normally hardcode the hostname/port in some property file, which is not a best practice. 

There could be any number of microservices, and it's time and resource-consuming to hard-code when there's an uncertain amount of them, and when their locations may change.

Discovery implies a mechanism where:
  • Services have no prior knowledge about the physical location of other Service Instances
  • Services advertise their existence and disappearance
  • Services are able to find instances of another Service based on advertised metadata
  • Instance failures are detected and they become invalid discovery results
The service discovery mechanism itself must not become a single point of failure.

To solve this issue, we need a tool that will monitor and maintain the registry of all the microservices in the ecosystem.

Netflix Eureka:

Netflix built Eureka to provide a solution to the above problem. It consists of the Eureka Server and Eureka clients. The Eureka Server is itself a microservice, to which all other microservices register. The Eureka clients are the independent microservices.

The actual routing is done at runtime along with equally distributing the load among the end services.
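As a minimal sketch of how this looks in practice with Spring Cloud Netflix (assuming spring-cloud-starter-netflix-eureka-server is on the classpath; the class name is illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

// Standing up the registry is a single annotation on a Spring Boot app.
@SpringBootApplication
@EnableEurekaServer
public class EurekaServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(EurekaServerApplication.class, args);
    }
}

Clients register themselves by adding the corresponding eureka-client starter and then look each other up by logical service name instead of a hardcoded host/port.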

There are other service discovery tools, such as Consul, ZooKeeper, etcd, Cloud Foundry, etc.


High Availability in Eureka:

Netflix Eureka is built for High Availability. In CAP Theorem terms, it favors Availability over Consistency.

Focus is on ensuring Services can find each other in unplanned scenarios like network partitions or Server crashes.

High Availability is achieved at two levels:
  • Server cluster: a production setup includes a cluster of Eureka Servers.
  • Client-side caching.
Client Side Caching:

One of the best Eureka features is client-side caching. The client regularly pulls discovery information from the registry and caches it locally. 

It basically has the same view on the system as the Server. In case all Servers go down or the Client is isolated from the Server by a network partition, it can still behave properly until its cache becomes obsolete.

Note:Clients retrieve and cache the registry information from the Eureka Server. In case all Servers crash, the Client still holds the last healthy snapshot of the registry.

Sunday, June 21, 2020

Install Jenkins Plugins

Plugins are the primary means of enhancing the functionality of a Jenkins environment to suit organization needs. 

There are over a thousand different plugins which can be installed on a Jenkins master to integrate various build tools, cloud providers, analysis tools, and much more.

Plugins can be automatically downloaded, with their dependencies, from the Update Center. 

The Update Center is a service operated by the Jenkins project which provides a repository of open source plugins which have been developed and maintained by various members of the Jenkins community.

Jenkins provides a couple of different methods for installing plugins on the master:
  • Using the "Plugin Manager" in the web UI.
  • Using the Jenkins CLI install-plugin command.
The plugins are packaged as self-contained .hpi files, which have all the necessary code, images, and other resources which the plugin needs to operate successfully.
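For the CLI route, a hedged sketch (assuming Jenkins is reachable at localhost:8080 and your credentials are configured; the git plugin is just an example):
java -jar jenkins-cli.jar -s http://localhost:8080/ install-plugin git
Depending on the plugin, a restart may be needed before it takes effect.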

Advanced installation:

The Update Center only allows the installation of the most recently released version of a plugin.

For older plugin versions, a Jenkins administrator can download an older .hpi archive and manually install it on the Jenkins master.




Docker on AWS

Docker is a tool that allows developers, sysadmins, etc. to easily deploy their applications in sandboxes (called containers) that run on the host operating system, i.e. Linux. 

The key benefit of Docker is that it allows users to package an application with all of its dependencies into a standardized unit for software development. Unlike virtual machines, containers do not have high overhead and hence enable more efficient usage of the underlying system and resources.

There are multiple ways to run your containerized workloads on AWS:
  • Amazon Elastic Container Service with AWS Fargate
  • AWS EC2 instances
  • Amazon Elastic Container Service for Kubernetes
  • AWS Elastic Beanstalk with single-container Docker

In this blog post, we are going to install Docker on an EC2 instance.
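A minimal sketch of the installation on an Amazon Linux 2 instance (assuming an SSH session as ec2-user):

# Update packages, install Docker, start the daemon,
# and allow ec2-user to run docker without sudo.
sudo yum update -y
sudo amazon-linux-extras install docker
sudo service docker start
sudo usermod -a -G docker ec2-user

Log out and back in for the group change to take effect, then verify with docker info.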


Jenkins Installation on Windows Machine


Thursday, June 18, 2020

Java 14 Features

A significant number of Java Enhancement Proposals (JEPs) have been released in version 14. (Even more JEPs than Java 12 and 13 combined).

JDK 14 reached General Availability on 17 March 2020.

Java 14 addresses a total of 16 main enhancements/changes (JEPs), ranging from Java language support to the latest APIs for ongoing JDK Flight Recorder monitoring. 

The entire feature list consists of:
  • Pattern Matching for instanceof (Preview)
  • Non-Volatile Mapped Byte Buffers (Incubator)
  • Helpful NullPointerExceptions
  • Switch Expressions (Standard)
  • Packaging Tool (Incubator)
  • NUMA-Aware Memory Allocation for G1
  • JFR Event Streaming
  • Records (Preview)
  • Deprecate the Solaris and SPARC Ports
  • Remove the Concurrent Mark Sweep (CMS) Garbage Collector
  • ZGC on macOS (experimental)
  • ZGC on Windows (experimental)
  • Deprecate the ParallelScavenge + SerialOld GC Combination
  • Remove the Pack200 Tools and API
  • Text Blocks (Second Preview)
  • Foreign-Memory Access API (Incubator)
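To make two of the headline items concrete, here is a hedged sketch showing switch expressions (standard in 14) and pattern matching for instanceof (a preview feature, so it needs --enable-preview --release 14 to compile and --enable-preview to run):

public class Java14Demo {
    public static void main(String[] args) {
        // Switch expression: an expression form with arrow labels and no fall-through.
        int day = 6;
        String type = switch (day) {
            case 6, 7 -> "weekend";
            default -> "weekday";
        };
        System.out.println(type); // weekend

        // Pattern matching for instanceof: binds s when the test succeeds.
        Object obj = "hello";
        if (obj instanceof String s) {
            System.out.println(s.length()); // 5
        }
    }
}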

Java 11 Features

JDK11 Features and Changes.

Important Changes:

  • The deployment stack, required for Applets and Web Start Applications, was deprecated in JDK 9 and has been removed in JDK 11.
  •  In Windows and macOS, installing the JDK in previous releases optionally installed a JRE. In JDK 11, this is no longer an option.
  • In this release, the JRE or Server JRE is no longer offered. Only the JDK is offered. Users can use jlink to create smaller custom runtimes.
  • JavaFX is no longer included in the JDK. It is now available as a separate download from openjfx.io.
  • Java Mission Control, which was shipped in JDK 7, 8, 9, and 10, is no longer included with the Oracle JDK. It is now a separate download.
  • Updated packaging format for Windows has changed from tar.gz to .zip.
  • Updated package format for macOS has changed from .app to .dmg.

Features:

1) Local-Variable Syntax for Lambda Parameters:

IntFunction<Integer> d1 = (int x) -> x * 2; // Valid

IntFunction<Integer> d2 = (var x) -> x * 3; // Error in Java 10, valid in Java 11

The second line won’t compile in Java 10, but will compile in Java 11.

But why do we need that at all, if we can simply write it like this?

IntFunction<Integer> d1 = x -> x * 3;

One reason is uniformity with Java 10's local-variable syntax; another is that a var parameter, unlike a bare parameter name, can carry annotations, e.g. (@Nonnull var x) -> x * 3.

 2) String::lines

Another new feature in Java 11 is String::lines, which helps in streaming the lines of a string. This is ideal for situations where you have a multiline string.

var str = "This\r\n is \r\nSrinivas";

str.lines()

    // we now have a `Stream<String>`

    .map(line -> "// " + line)

    .forEach(System.out::println);

// OUTPUT:

// This

// is

// Srinivas

 3) toArray(IntFunction) Default Method:

Another new feature in Java 11 is the toArray(IntFunction) default method, which is now part of the java.util.Collection interface. The method transfers the elements of the collection into a newly created array of a specific runtime type. You can think of it as an overload of the toArray(T[]) method, which takes an array instance as an argument.
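A short sketch of the difference (the list contents are illustrative):

import java.util.List;

List<String> list = List.of("alpha", "beta");
String[] before = list.toArray(new String[0]);   // the pre-Java 11 overload
String[] after  = list.toArray(String[]::new);   // Java 11 toArray(IntFunction)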

4) Epsilon Garbage Collector:

The addition of JEP 318 (Epsilon) to the top Java 11 features is another notable highlight. This no-op garbage collector handles only memory allocation without implementing any memory reclamation mechanism. Epsilon GC is helpful for performance testing and for cost-benefit comparison against other garbage collectors.
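Because it is experimental, Epsilon has to be unlocked explicitly on the command line; a sketch (MyApp is a placeholder class):

java -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC MyApp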

5) Improved KeyStore Mechanisms:

The new and improved KeyStore mechanisms are another Java 11 security enhancement. You can find a new security property with the name jceks.key.serialFilter in Java 11.

A JCEKS KeyStore uses this security filter during deserialization of the encrypted key object stored in a SecretKeyEntry. If the property is not configured, or the filter result is UNDECIDED, the default configuration from jdk.serialFilter is used instead.

6)  Z Garbage Collector:

One of the crucial new entries among the top Java 11 features is ZGC, the Z Garbage Collector. It is a scalable, low-latency garbage collector designed around specific goals: pause times do not exceed 10 ms, pause times do not increase with the size of the heap or the live set, and it can manage heaps ranging from hundreds of megabytes to multiple terabytes.
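ZGC was experimental (and Linux-only) in Java 11, so it likewise has to be unlocked; a sketch (MyApp is a placeholder class):

java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC MyApp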

7)  Dynamic Allocation of Compiler Threads:

Dynamic control of compiler threads is now possible in Java 11 with a new command-line flag, -XX:+UseDynamicNumberOfCompilerThreads. In tiered compilation mode, the VM previously started a large number of compiler threads on systems with many CPUs, regardless of the number of compilation requests or the available memory; with this flag, compiler threads are started and shut down dynamically instead.
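A sketch of setting the flag explicitly (MyApp is a placeholder; the flag may already be enabled by default in JDK 11):

java -XX:+UseDynamicNumberOfCompilerThreads MyApp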

8) New File Methods:

New file methods are also prominent attractions in this Java release. They include writeString(), readString(), and isSameFile(). writeString() writes content to a file, readString() reads the contents of a file, and isSameFile() checks whether two paths locate the same file.
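A short sketch of the first two (the file name is illustrative):

import java.nio.file.Files;
import java.nio.file.Path;

public class FileMethodsDemo {
    public static void main(String[] args) throws Exception {
        // Write a string to a file, then read it back (both new in Java 11).
        Path p = Files.writeString(Path.of("demo.txt"), "hello from Java 11");
        System.out.println(Files.readString(p)); // prints: hello from Java 11
    }
}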

9) isBlank(): 

This is a boolean method. It returns true when a string is empty or contains only white space, and false otherwise.

class Blog {
    public static void main(String[] args) {
        String str1 = "";
        System.out.println(str1.isBlank()); // true

        String str2 = "SriniBlog";
        System.out.println(str2.isBlank()); // false
    }
}

10) lines(): This method returns the lines of a string, split at line terminators, as a stream of strings.

import java.util.stream.Collectors;

class Blog {
    public static void main(String[] args) {
        String str = "Blog\nFor\nSrini";
        System.out.println(str.lines()
                              .collect(Collectors.toList())); // [Blog, For, Srini]
    }
}

11) Removal of thread functions: stop(Throwable obj) and destroy() have been removed from JDK 11 because they only threw UnsupportedOperationException and NoSuchMethodError respectively. Other than that, they were of no use.

12) Local-Variable Syntax for Lambda Parameters: JDK 11 allows ‘var’ to be used in lambda expressions. This was introduced to be consistent with local ‘var’ syntax of Java 10.

// Variable used in lambda expression

import java.util.stream.IntStream;

public class LambdaExample {
    public static void main(String[] args) {
        IntStream.of(1, 2, 3, 5, 6, 7)
                 .filter((var i) -> i % 2 == 0)
                 .forEach(System.out::println); // prints 2 and 6
    }
}

13) Pattern recognizing methods:

asMatchPredicate(): This method is similar to the Java 8 method asPredicate(). Introduced in JDK 11, it creates a predicate that tests whether the entire input string matches the pattern.

jshell> var str = Pattern.compile("aa").asMatchPredicate();

jshell> str.test("aabb");
Output: false

jshell> str.test("aa");
Output: true


Removed Features and Options

1. Removal of com.sun.awt.AWTUtilities Class
2. Removal of Lucida Fonts from Oracle JDK
3. Removal of appletviewer Launcher
4. Oracle JDK's javax.imageio JPEG Plugin No Longer Supports Images with alpha
5. Removal of sun.misc.Unsafe.defineClass
6. Removal of Thread.destroy() and Thread.stop(Throwable) Methods
7. Removal of sun.nio.ch.disableSystemWideOverlappingFileLockCheck Property
8. Removal of sun.locale.formatasdefault Property
9. Removal of JVM-MANAGEMENT-MIB.mib
10. Removal of SNMP Agent
11. Removal of Java Deployment Technologies
12. Removal of JMC from the Oracle JDK
13. Removal of JavaFX from the Oracle JDK
14. JEP 320: Remove the Java EE and CORBA Modules


Man-in-The-Middle Attack

A man-in-the-middle attack is a type of cyber attack where a malicious actor inserts themselves into a conversation between two parties, impersonates both parties, and gains access to information that the two parties were trying to send to each other.

Key Concepts of a Man-in-the-Middle Attack (MITM):
  • Man-in-the-middle is a type of eavesdropping attack that occurs when a malicious actor inserts themselves as a relay/proxy into a communication session between people or systems.
  • A MITM attack exploits the real-time processing of transactions, conversations or transfer of other data.
  • Man-in-the-middle attacks allow attackers to intercept, send and receive data never meant to be for them without either outside party knowing until it is too late.
A public key infrastructure, such as the one used by Transport Layer Security (TLS), can help harden Transmission Control Protocol connections against MITM attacks. 

Clients and servers exchange certificates which are issued and verified by a trusted third party called a certificate authority (CA).

If the original key to authenticate this CA has not been itself the subject of a MITM attack, then the certificates issued by the CA may be used to authenticate the messages sent by the owner of that certificate.

Use of mutual authentication, in which both the server and the client validate the other's communication, covers both ends of a MITM attack, though the default behavior of most connections is to only authenticate the server.

Offensive Security Tools:
  • SQLMap
  • Aircrack-ng
  • Ncrack
  • SSLstrip
  • Ettercap
  • Metasploit Framework

Monday, June 15, 2020

Greenfield and Brownfield

 

Greenfield Software Development:

Greenfield software development refers to developing a system for a totally new environment, requiring development from a clean slate with no legacy code around. It is the approach used when you're starting afresh, with no restrictions or dependencies.

A pure greenfield project is quite rare these days; you frequently end up interacting with or updating some amount of existing code or enabling integrations.

Some examples of greenfield software development include: building a website or app from scratch, setting up a new data center, or even implementing a new rules engine.

Advantages:

• Gives an opportunity to implement a state-of-the-art technology solution from scratch

• Provides a clean slate for software development

• No compulsion to work within the constraints of existing systems or infrastructure

• No dependencies or ties to existing software, preconceived notions, or existing business processes

Disadvantages:

 • With no clear direction, the degree of risk is comparatively higher

• Since all aspects of the new system need to be defined, it can be quite time consuming

• With so many possible development options, there may be no clear understanding of the approach to take

• It may be hard to get everyone involved to make critical decisions in a decent time frame



Brownfield Software Development:

Brownfield software development refers to the development and deployment of a new software system in the presence of existing or legacy software systems.

Brownfield development usually happens when you want to develop or improve upon an existing application, and compels you to work with previously created code.

Therefore, any new software architecture must consider and coexist with systems already in place – so as to enhance existing functionality or capability.

Examples of brownfield software development include: adding a new module to an existing enterprise system, integrating a new feature to software that was developed earlier, or upgrading code to enhance functionality of an app.

Advantages:

• Offers a place to start with a predetermined direction.

• Gives a chance to add improvements to existing technology solutions.

• Supports working with defined business processes and technology solutions.

• Allows existing code to be reused to add new features.

Disadvantages:

• Requires thorough knowledge of existing systems, services, and data on which the new system needs to be built.

• There may be a need to re-engineer a large portion of the existing complex environment so that it makes operational sense for the new business requirements

• Requires detailed and precise understanding of the constraints of the existing business and IT, so the new project does not fail.

• Dealing with legacy code can not only slow down the development process but also add to overall development costs. 

Refer: http://skolaparthi.com/


Friday, June 12, 2020

Kubectl Commands

Create a resource from a file or from stdin.
$ kubectl create -f FILENAME

Command to check the Kubernetes nodes:
$ kubectl get nodes

You can check the events in case container creation is being delayed.
$ kubectl get events

To get the list of services running:
$ kubectl get svc

Cluster Introspection:
=====================
# Get version information
$ kubectl version

# Get cluster information
$ kubectl cluster-info

# Get the configuration
$ kubectl config view

# Output information about a node
$ kubectl describe node <node>

Cron-Jobs:
==========
After creating the cron job, get its status using this command:
$ kubectl get cronjob <jobname>

Watch for the job to be created in around one minute:
$ kubectl get jobs --watch

When you don’t need a cron job any more, delete it with
$ kubectl delete cronjob <cronjob name>
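For reference, a cron job can also be created imperatively; a sketch (the name, image, and schedule are illustrative):
$ kubectl create cronjob hello --image=busybox --schedule="*/1 * * * *" -- echo "Hello"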

Labels:
=======
# Show the labels attached to each pod
kubectl get pods --show-labels

# Print the logs for a specific container in a pod
kubectl logs <podname> -c <containername>

Logging:
=======
# Return a snapshot of the logs from pod <pod-name>.
kubectl logs <pod-name>

# Start streaming the logs from pod <pod-name>. This is similar to the 'tail -f' Linux command.
kubectl logs -f <pod-name>

kubectl logs --tail=5 <podname> -c <Container-name>

Namespaces:
==========
To list namespaced objects such as our pod:
$ kubectl get pods --namespace=test

You can remove the namespace (and everything inside) with:
$ kubectl delete ns test

Describe a namespace:
$ kubectl describe ns default

List all namespaces:
$ kubectl get ns

Draining a Node:
===============
$ kubectl drain NODE
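Two related commands are worth knowing: cordon marks a node unschedulable without evicting anything, and uncordon brings it back after maintenance; on real clusters, drain usually needs --ignore-daemonsets:
$ kubectl cordon NODE
$ kubectl drain NODE --ignore-daemonsets
$ kubectl uncordon NODE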

Saturday, June 6, 2020

Design Patterns in Microservices

Microservices are a distinctive method of developing software systems that focuses on building single-function modules with well-defined interfaces and operations.

Microservices have many benefits for Agile and DevOps teams.

The goal of microservices is to increase the velocity of application releases, by decomposing the application into small autonomous services that can be deployed independently. A microservices architecture also brings some challenges.

The design patterns shown here can help mitigate these challenges.

Below are the microservices design patterns:

Decomposition Patterns:

  • Decompose by Business Capability
  • Decompose by Subdomain
  • Decompose by Transactions
  • Strangler Pattern
  • Bulkhead Pattern
  • Sidecar Pattern

Integration Patterns:

  • API Gateway Pattern
  • Aggregator Pattern
  • Proxy Pattern
  • Gateway Routing Pattern
  • Chained Microservice Pattern
  • Branch Pattern
  • Client-Side UI Composition Pattern

Database Patterns:

  • Database per Service
  • Shared Database per Service
  • CQRS
  • Event Sourcing
  • Saga Pattern

Observability Patterns:

  • Log Aggregation
  • Performance Metrics
  • Distributed Tracing
  • Health Check

Cross-Cutting Patterns:

  • External Configuration
  • Service Discovery Pattern
  • Circuit Breaker Pattern
  • Blue-Green Deployment Pattern
For More Info, visit the below website: http://skolaparthi.com/

Monday, June 1, 2020

Docker Swarm

Docker Swarm is a group of either physical or virtual machines that are running the Docker application and that have been configured to join together in a cluster. Once a group of machines have been clustered together, you can still run the Docker commands that you're used to, but they will now be carried out by the machines in your cluster.

The activities of the cluster are controlled by a swarm manager, and machines that have joined the cluster are referred to as nodes.

Docker swarm is a container orchestration tool, meaning that it allows the user to manage multiple containers deployed across multiple host machines.

The cluster management and orchestration features embedded in the Docker Engine are built using Swarmkit. 

A Node is an instance of the Docker engine participating in the swarm. 

Worker Nodes receive and execute tasks dispatched from manager nodes. By default manager nodes also run services as worker nodes, but you can configure them to run manager tasks exclusively and be manager-only nodes. 

Manager nodes perform the orchestration and cluster management functions required to maintain the desired state of the swarm. 

The swarm manager can automatically assign the service a PublishedPort or you can configure a PublishedPort for the service.

A Service is the definition of tasks to execute on the manager or worker nodes. It is the central structure of the swarm system and the primary root of user interaction with the swarm.
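For example, a service can be created on a manager node like this (the name, replica count, and image are illustrative):
docker service create --name web --replicas 3 -p 8080:80 nginx
The swarm scheduler spreads the three tasks across the available nodes and reconciles them if a node fails.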

Node Management:

  • Initialize a swarm: docker swarm init
  • List swarm nodes: docker node ls

Activate a node (after maintenance): 

docker node update --availability active node_name

Service management:

  • List services (manager node): docker service ls
  • Describe services (manager node): docker service ps service_name
  • Inspect a service: docker service inspect service_name
  • Scale a service: docker service scale service_name=N
  • Remove service: docker service rm service_name

Stack management:

  • Deploy stack from docker-compose file: docker stack deploy -c docker-compose.yml stack_name
  • List stacks: docker stack ls
  • List stack services: docker stack services stack_name
  • List stack tasks: docker stack ps stack_name
  • Remove stack: docker stack rm stack_name

Network management:

  • List networks: docker network ls
  • Create overlay network: docker network create -d overlay network_name
  • Remove network: docker network rm network_name

Monitor services:

  • Docker stats: docker stats
  • Service logs: docker service logs service_name

AWS Disaster Recovery

Amazon Web Services provides disaster recovery solutions for customers to develop robust, cost-effective, targeted and well-tested recovery plans.

Disaster recovery (DR) is about preparing for and recovering from a disaster. Any event that has a negative impact on a company's business continuity or finances could be termed a disaster. One of the AWS best practices is to always design your systems for failure.

Recovery Time Objective (RTO) – The time it takes after a disruption to restore a business process to its service level, as defined by the operational level agreement (OLA). For example, if the RTO is 1 hour and a disaster occurs at 12:00 p.m. (noon), then the DR process should restore the systems to an acceptable service level within an hour, i.e. by 1:00 p.m.

Recovery Point Objective (RPO) – The acceptable amount of data loss, measured in time, before the disaster occurs. For example, if a disaster occurs at 12:00 p.m. (noon) and the RPO is one hour, the system should recover all data that was in the system before 11:00 a.m.

If we need lower values of RTO and RPO, then the cost of running the application will be higher.

ES12 new Features