Monday, August 17, 2020

AWS Copilot

It is a tool for developers to develop, release, and operate production-ready containerized applications on Amazon ECS. From getting started, through pushing to staging, to releasing in production, Copilot can help manage the entire lifecycle of your application development.

AWS Copilot is a tool in the Container Tools category of a tech stack.

Copilot creates modern application deployments by default, based on production-ready patterns that include best practices designed by ECS engineers and customers over the years.

The AWS Copilot command line interface (CLI) provides application-first, high-level commands to simplify modeling, creating, releasing, and managing production-ready containerized applications on Amazon ECS from a local development environment.

We can use Homebrew to install the CLI. Prior to installation, you should have Homebrew installed.

The following command installs the AWS Copilot CLI on your macOS or Linux system using Homebrew:

brew install aws/tap/copilot-cli

or

sudo curl -Lo /usr/local/bin/copilot https://github.com/aws/copilot-cli/releases/download/v0.3.0/copilot-linux-v0.3.0 \
  && sudo chmod +x /usr/local/bin/copilot \
  && copilot --help

Architecture:

Copilot has three main concepts:

Application: An application is a grouping mechanism for the pieces of your system. Following Conway’s Law, you would split your components into Copilot applications that correspond to the different teams in your organization.

For example, if you have a small organization with a unified development team that works on a bit of everything, then you can probably organize things as a single application made up of one or more services. But if you have multiple teams, each responsible for a single group of components, and very little cross-team work, then each team should have their own Copilot application.

Environment: An environment is one stage of deployment of an application. For example, you might deploy an application to a “QA” environment.

Service: A service is a single long-running code process inside a container. An application consists of one or more services. If you are using a monolithic architecture, then it’s likely each application will have just a single service. A more distributed architecture will utilize multiple services for each application.

For example, you might have a “website” service with an internet facing load balancer, an internal “API” service that is only accessible via service discovery, and a “background worker” service that works on jobs off a queue. Together these services make up the components of a single application.
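
With these concepts in place, a typical end-to-end flow might look like the following sketch, based on the Copilot CLI's documented commands (exact prompts and flags vary between versions, and the environment name here is a placeholder):

copilot init                     # create an application and its first service from a Dockerfile
copilot env init --name test     # provision a deployment environment
copilot svc deploy --env test    # build the image, push it, and deploy the service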

Monday, August 10, 2020

Jdbc Connection Pool

 A JDBC connection pool is a group of reusable connections for a particular database. 

Because creating each new physical connection is time consuming, the server maintains a pool of available connections to increase performance.

 When an application requests a connection, it obtains one from the pool. When an application closes a connection, the connection is returned to the pool.
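
Below is a minimal sketch of that borrow-and-return cycle in Java, assuming some javax.sql.DataSource implementation has already been configured as the pool (the query is a placeholder):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class PoolUsageExample {
    static void runQuery(DataSource pool) throws SQLException {
        // getConnection() borrows an existing physical connection from the pool
        try (Connection conn = pool.getConnection();
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        } // close() returns the connection to the pool instead of destroying it
    }
}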

When creating a connection pool, you are actually defining the aspects of a connection to a specific database. Before creating the pool, you must first install and integrate the JDBC driver.

The properties of connection pools can vary with different database vendors.

  1. Some common properties are the database’s name (URL), user name, and password.
  2. Certain data specific to the JDBC driver and the database vendor must be entered. Before proceeding, gather the following information:
    • Database vendor name
    • Resource type, such as javax.sql.DataSource (local transactions only) or javax.sql.XADataSource (global transactions)
    • Data source class name: if the JDBC driver has a DataSource class for the resource type and database, then the value of the Datasource Classname field is required.
    • Required properties, such as the database name (URL), user name, and password


Connection Pool Libraries considered for analysis:

1) Apache Commons DBCP2:
  • Easy to configure. Very detailed configuration page.
  • Supports prepared statement caching
  • Does not validate connections on borrow
  • Does not reset transaction isolation when returns to the pool
  • Does not reset the read-only status of the connection
  • Does not reset the auto-commit status of the connection
  • Does not clear SQL warnings
  • DBCP2 compiles and runs under Java 7 only (JDBC 4.1); it is not clear whether it runs on Java 8
  • More than 60 classes in the library
  • The code is hosted on GitHub: https://github.com/apache/commons-dbcp. The repository has 21 contributors and only 2 contributions in the year 2017. There are a lot of broken links on the home page, and the Javadoc links are broken.
  • I logged into the Apache bug tracker (JIRA) to check the number of bugs; there are 11 open and reopened bugs.
  • Licensed under Apache 2.0
2) C3P0:
C3P0 is one of the oldest and best-known libraries. It is very well documented. However, according to some reports, it’s easy to misconfigure the pool and can result in poor performance and deadlocks.
  • It comes in a package with hibernate.
  • Works well in heavily concurrent environments.
  • Supports prepared statement pooling
  • Resets auto commit status when returning to the pool
  • Resets the read-only status when returning to the pool
  • Resets isolation level when returning to the pool
  • Does not test connections at getConnection()
  • Around 200 classes in the library. Huge code base compared to others.
  • The code is hosted on GitHub. The library has 5 contributors, with 5 contributions to the code this year; the last check-in was on May 4.
  • Issues are tracked on the GitHub tracker, with 41 issues listed.
  • The library is licensed under LGPL v.2.1 and EPL v.1.0
3) Tomcat JDBC:
  • Supports highly concurrent environments and multi-core/CPU systems.
  • It is very well documented.
  • Supports async connection retrieval: it can queue connection requests and return a Future.
  • Ability to configure custom interceptors to enhance functionality.
  • It provides multiple ways to configure the pool: inside the Tomcat container, via JMX, or standalone using a DataSource bean.
  • It does not by default reset auto-commit and transaction isolation levels for connections in the pool; users must configure custom interceptors to do this.
  • Does not by default test connections on getConnection()
  • It does not close abandoned open statements.
  • Pool prepared statement properties are not used.
  • The code is hosted on GitHub. It has 13 active contributors.
  • This is a very active community, as one can see from the commits and bug list.
  • Licensed under Apache 2.0
4) HikariCP:
  • Very light library, around 130 KB.
  • Tests connections at getConnection()
  • Tracks and closes abandoned connections.
  • Clears connections before returning the connection to the client.
  • Resets auto-commit, transaction isolation, read-only status
  • The library has 58 active contributors and has a very detailed configuration page with performance benchmarks.
  • The library has 33 issues listed on GitHub
  • Licensed under Apache 2.0
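
To make the comparison concrete, here is a minimal HikariCP configuration sketch in Java (the JDBC URL, credentials, and pool size are placeholder values, and the driver for your database must be on the classpath):

import javax.sql.DataSource;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class HikariPoolExample {
    public static DataSource createPool() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb"); // placeholder URL
        config.setUsername("dbuser");                               // placeholder credentials
        config.setPassword("dbpassword");
        config.setMaximumPoolSize(10);                              // tune for your workload
        // HikariCP tests connections at getConnection() and resets auto-commit,
        // isolation level, and read-only status when connections are returned.
        return new HikariDataSource(config);
    }
}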

Monday, August 3, 2020

Introduction to Quartz Scheduler

Quartz is a job scheduling library that can be integrated into a wide variety of Java applications.
Quartz is generally used in enterprise-class applications to support process workflow, system management actions, and to provide timely services within the applications.
You can create complex schedules for executing any job; examples are tasks that run daily, every other Monday at 8:30 p.m., or only on the last day of every month.

You can download Quartz from http://quartz-scheduler.org/downloads

Components:
============
Quartz job:
A Quartz job holds the logic or code that you want to execute. It implements the org.quartz.Job interface.

Quartz trigger:
A Quartz trigger defines the moment when the Quartz scheduler will execute the job.

Types of Quartz trigger:
1. SimpleTrigger – sets the start time, end time, repeat count, and repeat interval for a Quartz job.
2. CronTrigger – uses cron expressions to specify the moment when to execute a Quartz job. A cron expression is made up of seven sub-expressions:
Seconds
Minutes
Hours
Day-of-Month
Month
Day-of-Week
Year (optional field)
For example, the expression "0 15 10 ? * MON-FRI" fires at 10:15 a.m. every Monday through Friday.

Scheduler class:
The Scheduler class connects the Quartz job and the Quartz trigger together and executes the job.

JobListener:
A JobListener provides the facility to track the status of running jobs. To write a JobListener, we have to implement the JobListener interface.

Quartz API:
===========
The key interfaces of the API are listed here (a minimal example follows the list):
Scheduler – the primary API for interacting with the scheduler of the framework
Job – an interface to be implemented by components that we wish to have executed
JobDetail – used to define instances of Jobs
Trigger – a component that determines the schedule upon which a given Job will be performed
JobBuilder – used to build JobDetail instances, which define instances of Jobs
TriggerBuilder – used to build Trigger instances
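
Below is a minimal sketch tying these interfaces together, using the Quartz 2.x API (the job class and identities are hypothetical):

import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.SimpleScheduleBuilder;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class HelloJob implements Job {
    @Override
    public void execute(JobExecutionContext context) {
        // the job's logic goes here
        System.out.println("Job fired at " + context.getFireTime());
    }

    public static void main(String[] args) throws SchedulerException {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        // JobDetail defines an instance of the Job
        JobDetail job = JobBuilder.newJob(HelloJob.class)
                .withIdentity("helloJob", "group1")
                .build();
        // SimpleTrigger: start now, repeat every 10 seconds
        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("helloTrigger", "group1")
                .startNow()
                .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                        .withIntervalInSeconds(10)
                        .repeatForever())
                .build();
        scheduler.scheduleJob(job, trigger);
        scheduler.start();
    }
}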




Wednesday, July 29, 2020

Multi-Module Project using Maven

A Spring Boot project that contains nested Maven projects is called a multi-module project.

In the multi-module project, the parent project works as a container for base Maven configurations. A multi-module project is defined by a parent POM referencing one or more submodules.

The parent Maven project must declare the packaging type pom, which makes the project an aggregator.

The pom.xml file of the parent project contains the list of all modules, common dependencies, and properties that are inherited by the child projects. The parent POM is located in the project's root directory. The child modules are actual Spring Boot projects that inherit the Maven properties from the parent project.

Benefits of Using Multi-Modules:
The significant advantage of using this approach is that we may reduce duplication.
Let's say we have an application which consists of several modules, say a front-end module and a back-end module.
 
Now, we work on both of them and change functionality which affects the two. In that case, without a specialized build tool, we'll have to build both components separately or write a script which would compile the code, run tests and show the results.

Then, after we get even more modules in the project, it will become harder to manage and maintain.
Besides, in the real world, projects may need certain Maven plugins to perform various operations during the build lifecycle, share dependencies and profiles, or include other BOM projects.

Therefore, when leveraging multi-modules, we can build our application's modules with a single command, and if the order matters, Maven will figure this out for us. Also, we can share a vast amount of configuration with other modules.

Parent POM:
Maven supports inheritance: each pom.xml file has an implicit parent POM, called the Super POM, which is located in the Maven binaries.
These two files are merged by Maven and form the Effective POM.
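
As an illustrative sketch, a parent POM for the front-end/back-end example above might look like this (the group and module names are hypothetical):

<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.example</groupId>
    <artifactId>my-app-parent</artifactId>
    <version>1.0.0</version>
    <!-- packaging "pom" makes this project an aggregator -->
    <packaging>pom</packaging>

    <!-- child modules inherit common dependencies and properties from this POM -->
    <modules>
        <module>backend</module>
        <module>frontend</module>
    </modules>
</project>

Running mvn package from the parent directory builds all modules in the right order, and mvn help:effective-pom shows the result of merging a module's POM with its parents and the Super POM.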


Introduction to Spring Boot

Spring Boot is an open-source micro framework maintained by a company called Pivotal.
It provides Java developers with a platform to get started with an auto-configurable, production-grade Spring application. With it, developers can get started quickly without losing time on preparing and configuring their Spring application.

Spring Boot is built on top of the Spring framework, and it comes with many dependencies that can be plugged into the Spring application.
Some examples are Spring Kafka, Spring LDAP, Spring Web Services, and Spring Security. In plain Spring, however, developers have to configure each building brick themselves using a lot of XML configuration files or annotations.

In Spring Boot, there is no requirement for XML configuration (deployment descriptor). It uses the convention-over-configuration software design paradigm, which decreases the effort of the developer.

Features:
  • Create stand-alone Spring applications
  • Embed Tomcat, Jetty or Undertow directly (no need to deploy WAR files)
  • Provide opinionated 'starter' dependencies to simplify your build configuration
  • Automatically configure Spring and 3rd party libraries whenever possible
  • Provide production-ready features such as metrics, health checks, and externalized configuration
  • Absolutely no code generation and no requirement for XML configuration

Setup Spring Boot:
  • Set up the Java JDK from Oracle’s official site.
  • Download and set up STS (Spring Tools Suite).
  • Start a new spring starter project
  • Click on File -> New -> Spring starter project
  • Fill in the appropriate details, add dependencies, and finish.
  • Edit the application properties.
  • Run the main file (shown below) as a Java application.
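
The main file generated by the starter project is a single annotated class; a minimal sketch (the class name is hypothetical):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// @SpringBootApplication combines @Configuration, @EnableAutoConfiguration, and @ComponentScan
@SpringBootApplication
public class DemoApplication {
    public static void main(String[] args) {
        // starts the embedded server and auto-configures the application
        SpringApplication.run(DemoApplication.class, args);
    }
}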

Saturday, July 25, 2020

Kong API Gateway

Kong Gateway is an open-source, lightweight API gateway optimized for microservices, delivering unparalleled latency performance and scalability.

Kong can be deployed in a variety of configurations: as an edge API gateway, as an internal API proxy, or even as a sidecar in a service mesh configuration.



Key Concepts in Kong:

  • API Object – wraps properties of any HTTP(s) endpoint that accomplishes a specific task or delivers some service. Configurations include HTTP methods, endpoint URIs, upstream URL which points to our API servers and will be used for proxying requests, maximum retries, rate limits, timeouts, etc.
  • Consumer Object – wraps properties of anyone using our API endpoints. It will be used for tracking, access control and more
  • Upstream Object – describes how incoming requests will be proxied or load balanced, represented by a virtual hostname
  • Target Object – represents where the services are implemented and served, identified by a hostname (or an IP address) and a port. Note that targets of every upstream can only be added or disabled; a history of target changes is maintained by the upstream
  • Plugin Object – pluggable features to enrich functionalities of our application during the request and response lifecycle. For example, API authentication and rate limiting features can be added by enabling relevant plugins. Kong provides very powerful plugins in its plugins gallery
  • Admin API – RESTful API endpoints used to manage Kong configurations, endpoints, consumers, plugins, and so on (see the sketch after this list)
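
For example, registering an API object and enabling the rate-limiting plugin through the Admin API might look like the following sketch (this targets the classic Admin API on its default port 8001; the names and upstream URL are placeholders, and newer Kong versions model this with Service and Route objects instead):

# register an API object that proxies requests for example.com to an upstream service
curl -i -X POST http://localhost:8001/apis \
  --data 'name=example-api' \
  --data 'hosts=example.com' \
  --data 'upstream_url=http://internal-service:8080'

# enable the rate-limiting plugin for that API
curl -i -X POST http://localhost:8001/apis/example-api/plugins \
  --data 'name=rate-limiting' \
  --data 'config.minute=100'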

There are two Kong editions:
  • Community Edition: Kong’s CE version boasts a comprehensive range of functionalities, including open-source plugin support, load balancing, and service discovery, but does not include a management panel. Therefore, we will need to configure Kong via REST, or use an open-source dashboard such as Konga or Kong Dashboard.
  • Enterprise Edition: features expanded out-of-the-box functionalities, such as the management dashboard, security plugins, metrics, and 24×7 support, to name a few.
Kong download link: https://konghq.com/kong/

Tuesday, July 21, 2020

Akka Toolkit

Akka is a free and open-source toolkit and runtime simplifying the construction of concurrent and distributed applications on the JVM.

Akka supports multiple programming models for concurrency, but it emphasizes actor-based concurrency, with inspiration drawn from Erlang.

Akka handles all of the underlying threading complexities by allowing you to just focus on dividing your tasks into actors, defining messages to pass among actors, and wiring together their communication logic.

The real power of Akka is realized when you distribute actors across multiple machines.

Akka follows the Actor Model.

In the actor model, the actor is represented as an individual entity. 
Some characteristics of the actor are as follows:
  • An actor encloses state and application logic.
  • Actors interact only using messages.
  • Every actor has an exclusive address and mailbox through which it can receive messages from others.
  • Mailbox messages are processed by the actor in sequential order.
  • A tree hierarchy is used to represent the actor system.
Benefits of using the Actor Model:
The following characteristics of Akka allow you to solve difficult concurrency and scalability challenges in an intuitive way (a minimal actor example follows the list):
  • Event-driven model — Actors perform work in response to messages. Communication between Actors is asynchronous, allowing Actors to send messages and continue their own work without blocking to wait for a reply.
  • Strong isolation principles — Unlike regular objects in Java, an Actor does not have a public API in terms of methods that you can invoke. Instead, its public API is defined through messages that the actor handles. This prevents any sharing of state between Actors; the only way to observe another actor’s state is by sending it a message asking for it.
  • Location transparency — The system constructs Actors from a factory and returns references to the instances. Because location doesn’t matter, Actor instances can start, stop, move, and restart to scale up and down as well as recover from unexpected failures.
  • Lightweight — Each instance consumes only a few hundred bytes, which realistically allows millions of concurrent Actors to exist in a single application.
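
Below is a minimal sketch of a classic (untyped) actor in Java, assuming Akka 2.5.x is on the classpath (the class and actor names are hypothetical):

import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

public class Greeter extends AbstractActor {
    // the actor's public API is the set of message types it handles
    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(String.class, name -> System.out.println("Hello, " + name))
                .build();
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("demo");
        ActorRef greeter = system.actorOf(Props.create(Greeter.class), "greeter");
        // tell() delivers the message asynchronously to the actor's mailbox
        greeter.tell("world", ActorRef.noSender());
        system.terminate();
    }
}
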
Akka Actors:
  • Ask Pattern
  • Ask Pattern mapTo
  • Ask Pattern pipeTo
  • Actor Hierarchy
  • Actor Lookup
  • Child actors
  • Actor Lifecycle
  • Actor PoisonPill
  • Error Kernel Supervision
Akka Routers:
  • RoundRobinPool
  • ScatterGatherFirstCompletedPool
  • TailChoppingPool
  • BroadcastPool
Akka Dispatchers:
  • Akka Default Dispatcher
  • Akka Lookup Dispatcher
  • Fixed Thread Pool Dispatcher
  • Resizable Thread Pool Dispatcher
  • Pinned Thread Pool Dispatcher

Akka HTTP: 

As the name implies, this module is typically best suited for middle-tier applications which require an HTTP endpoint. As an example, you could use Akka HTTP to expose a REST endpoint that interfaces with a storage layer such as a database. For additional information, you can refer to the official Akka documentation on Akka HTTP.


Akka Streams:

This module is useful when you are working on data pipelines or even stream processing. For additional information, you can refer to the official Akka documentation on Akka Streams.


Akka Networking:

This module provides the foundation for having actor systems being able to connect to each other remotely over some predefined network transport such as TCP. For additional information, you can refer to the official Akka documentation on Akka Networking.


Akka Clustering:

This module is an extension of the Akka Networking module. It is useful in scaling distributed applications by having actors form a quorum and work together by some predefined membership protocol. For additional information, you can refer to the official Akka documentation on Akka Clustering.


For creating a simple application, use the link below:

https://developer.lightbend.com/start/?group=akka&project=akka-quickstart-java




 

ES12 new Features