Showing posts with label AWS.

Saturday, September 25, 2021

Keycloak

 Keycloak is an open source Identity and Access Management solution targeted towards modern applications and services.

Keycloak offers features such as Single-Sign-On (SSO), Identity Brokering and Social Login, User Federation, Client Adapters, an Admin Console, and an Account Management Console.

Below are the main Keycloak features:

1) Multiple protocol support

2) SSO support

3) Web-based admin GUI

4) External Identity Source Sync

If your client already has some type of user database, Keycloak can synchronize with it. By default, it supports LDAP and Active Directory, but you can create custom extensions for any user database using the Keycloak User Storage API.

5) Identity Brokering

Keycloak can also work as a proxy between your users and one or more external identity providers. The list of providers can be edited from the Keycloak admin panel.

6) Social Identity Providers

Additionally, Keycloak allows us to use social identity providers. It has built-in support for Google, Twitter, Facebook and Stack Overflow, but each of them has to be configured manually from the admin panel.

7) Customizations

Currently, Keycloak supports the following distributions:

1) Server (standalone)

2) Docker image

3) Operator (for Kubernetes and OpenShift)
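
For example, the Docker distribution can be started locally for testing. This is a minimal sketch; the environment variables and start command below assume a recent quay.io/keycloak/keycloak image and differ in older versions:

# run a local development instance of Keycloak on port 8080
# (the admin credentials here are placeholders for a local test only)
docker run -p 8080:8080 \
  -e KEYCLOAK_ADMIN=admin \
  -e KEYCLOAK_ADMIN_PASSWORD=admin \
  quay.io/keycloak/keycloak start-dev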

Link: https://www.keycloak.org/

Monday, December 14, 2020

Amazon Connect

Amazon Connect is a public cloud customer contact center service from Amazon Web Services (AWS).

Amazon Connect enables customer service representatives to respond to phone calls or chat inquiries from end customers just as if the contact center infrastructure was set up and managed on premises. 

According to Amazon, the service can scale to accommodate tens of thousands of call center agents. AWS provides a telecommunication infrastructure for each company that uses the service. 

To access the service, users are required to have an Amazon Connect account or an AWS Identity and Access Management account.

How Amazon Connect works

The customer who requested the service becomes the administrator for Amazon Connect. The admin's first task is to claim a phone number and configure permissions for users, which include operators, contact center managers and agents. 

To deploy the service, the admin must first create an Amazon Connect cloud instance. To do this, the customer logs into their AWS Management Console and completes several tasks. 

First, the AWS customer must create or select a user directory, which can include an outside option, such as Microsoft Active Directory. Next, the customer creates a user with administrator privileges and then selects telephony options (such as whether the contact center needs to place calls, receive calls or both) and the location for data storage.
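
As a rough illustration, the instance creation step can also be scripted with the AWS CLI. This is a minimal sketch that assumes a Connect-managed user directory; the instance alias below is a hypothetical placeholder:

# create a Connect instance that manages its own users and handles inbound and outbound calls
aws connect create-instance \
    --identity-management-type CONNECT_MANAGED \
    --instance-alias my-contact-center \
    --inbound-calls-enabled \
    --outbound-calls-enabled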

Amazon Connect: Contact Flows

A Contact Flow defines each step a customer can make when they interact with the contact center.

The logic is similar to that of an automated Interactive Voice Response (IVR) system and determines an end user's experience. An admin can configure a Contact Flow through a graphical user interface in the console to play a recorded prompt, ask a caller a question, manage call recording settings or transfer a call.

A Contact Flow also supports text-to-speech with Amazon Polly and allows developers to customize pronunciation, speech rate and volume with Speech Synthesis Markup Language (SSML).

In addition, Amazon Connect can route end user calls according to agent skills, availability and caller priority. The service assigns agents a routing profile to fit agent expertise to one or more call queues.

Connect also provides a Contact Flow Logs feature, which allows a business to track and streamline its interactions with end customers via the contact center. Amazon CloudWatch stores these logs in the same region as the Connect instance.

Additionally, the Connect service provides nearly 100 metrics -- both in historical and real-time reports -- to help monitor contact center performance. An admin can export these reports to Simple Storage Service buckets, where they will be encrypted.

Software, hardware support

Amazon Connect relies on the open source WebRTC standard and supports a number of web browsers, including Google Chrome and Mozilla Firefox.

Amazon Connect enables voice interactions via a public switched telephone network provided by AMCS LLC. The service supports dual-tone multi-frequency signals, text-to-speech conversion via Amazon Polly and natural language interactions via Amazon Lex. 

Contact center agents use either a web-based softphone or traditional phone to interact with end users and a Contact Control Panel to manage these interactions.

In addition to its interactions with Amazon cloud services, Amazon Connect integrates with third-party customer relationship management (CRM), workforce management and analytics tools.

Languages and costs

Amazon Connect provides call center support for English, Spanish, Brazilian Portuguese, Korean, German, Simplified Chinese and Japanese.

Connect is a pay-as-you-go service. After a customer exhausts the free tier of service, AWS charges for Connect use by the minute, plus daily telecom charges.

Sunday, September 27, 2020

AWS Greengrass

AWS IoT Greengrass is software that extends cloud capabilities to local devices. It lets devices collect and analyze data closer to the source (the IoT devices themselves), respond more quickly, and communicate securely over local networks, even when they are not connected to the cloud.

These devices are collectively known as a Greengrass group. 

Greengrass groups are configured and defined from the cloud but do not “need” the cloud to connect and communicate with each other.

In AWS IoT Greengrass, devices securely communicate over a local network and exchange data without a connection to the cloud. AWS IoT Greengrass achieves this through a local pub/sub message manager that can buffer messages even while there is no connectivity, thus preserving messages to and from the cloud.

Building an AWS Greengrass group:

Step 1: Establish a Greengrass core:

Every group needs a Greengrass core to function. Central to the group is a physical device on which the Greengrass core software is installed. The core software securely connects the device to AWS. There can be only one core per group.

Step 2: Build the group:

Once the core is established, we can continue to add devices to the group: other AWS IoT-provisioned devices, as well as AWS Lambda functions, which are essentially simple programs that can process or respond to data. Presently, a Greengrass group can contain up to 200 devices, and a device can be a member of up to 10 groups.
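
As a rough illustration, a group can also be created and deployed from the AWS CLI. This is a minimal sketch; the group name and the ID placeholders are hypothetical, and core, device and function definitions still have to be attached to a group version before deployment:

# create an empty Greengrass group and note the returned group ID
aws greengrass create-group --name MyFirstGroup

# once a group version with a core definition exists,
# push the configuration down to the core device
aws greengrass create-deployment \
    --group-id <group-id> \
    --group-version-id <group-version-id> \
    --deployment-type NewDeployment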

Step 3: Code the group:

Once deployed, the core and devices can communicate with each other, even without a cloud connection.

IoT vs. AWS IoT Greengrass:

The IoT (Internet of Things) is a network of connected devices called “things” that are connected to a cloud server (a Fitbit, a fridge or a thermal sensor could be a “thing”).

Data aggregated from these things on the cloud server can be monitored in real time, and the system can react or respond immediately.

Monday, August 17, 2020

AWS Copilot

AWS Copilot is a tool for developers to develop, release, and operate production-ready containerized applications on Amazon ECS. From getting started, through pushing to staging, to releasing to production, Copilot can help manage the entire lifecycle of your application development.

AWS Copilot is a tool in the Container Tools category of a tech stack.

Copilot creates modern application deployments by default, based on production-ready patterns that include best practices designed by ECS engineers and customers over the years.

The AWS Copilot command line interface (CLI) provides application-first, high-level commands to simplify modeling, creating, releasing, and managing production-ready containerized applications on Amazon ECS from a local development environment.

We can use Homebrew to install the CLI.

The following command is used to install the AWS Copilot CLI on your macOS or Linux system using Homebrew. 

Prior to installation, you should have Homebrew installed. 

brew install aws/tap/copilot-cli

or

sudo curl -Lo /usr/local/bin/copilot https://github.com/aws/copilot-cli/releases/download/v0.3.0/copilot-linux-v0.3.0 \
  && sudo chmod +x /usr/local/bin/copilot \
  && copilot --help

Below is the Architecture:

Copilot has three main concepts:

Application : An application is a grouping mechanism for the pieces of your system. Following Conway’s Law you would split up your components into Copilot applications that correspond to the different teams in your organization. 

For example, if you have a small organization with a unified development team that works on a bit of everything, then you can probably organize things as a single application made up of one or more services. But if you have multiple teams, each responsible for a single group of components, and very little cross-team work, then each team should have its own Copilot application.

Environment : An environment is one stage of deployment of an application. For example, you might deploy an application to a “QA” environment. 

Service: A service is a single long running code process inside a container. An application consists of one or more services. If you are using a monolithic architecture, then it’s likely each application will have just a single service. A more distributed architecture will utilize multiple services for each application. 

For example, you might have a “website” service with an internet facing load balancer, an internal “API” service that is only accessible via service discovery, and a “background worker” service that works on jobs off a queue. Together these services make up the components of a single application.
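
As a rough sketch of how these concepts map onto the CLI (the application, service and environment names here are hypothetical, and flags may differ slightly between Copilot versions):

# create an application and a load-balanced web service from a local Dockerfile
copilot init --app demo --name api --type "Load Balanced Web Service" --dockerfile ./Dockerfile

# create a deployment environment and release the service into it
copilot env init --name test
copilot svc deploy --name api --env test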

Friday, June 26, 2020

Aws HoneyCode

Amazon Honeycode is a new fully managed AWS service that gives you the power to build mobile and web applications without writing any code.

Amazon Honeycode is available in beta. 

Honeycode comes with built-in application templates that users can pick to start building their own apps. Some of these templates come in handy for building surveys, task trackers, inventory management systems or event management processes.

It uses the familiar spreadsheet model and lets you get started in minutes. If you or your teammates are already familiar with spreadsheets and formulas, you’ll be happy to hear that just about everything you know about sheets, tables, values, and formulas still applies.

The service uses an underlying AWS database, allowing data to be linked, filtered and sorted, but with a point-and-click interface and a data entry structure reminiscent of a spreadsheet.

The service is free for applications with up to 20 users, with charges kicking in for larger applications.



Sunday, June 21, 2020

Docker on AWS

Docker is a tool that allows developers, sysadmins and others to easily deploy their applications in sandboxes (called containers) that run on the host operating system, i.e. Linux.

The key benefit of Docker is that it allows users to package an application with all of its dependencies into a standardized unit for software development. Unlike virtual machines, containers do not have high overhead and hence enable more efficient usage of the underlying system and resources.

There are multiple ways to run your containerized workloads on AWS:
  • Amazon Elastic Container Service (ECS) with AWS Fargate
  • Amazon EC2 instances
  • Amazon Elastic Kubernetes Service (EKS)
  • AWS Elastic Beanstalk with single-container Docker

In this blog post, we are going to install Docker on an EC2 instance.
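
Below is a minimal sketch of the installation, assuming an Amazon Linux 2 instance reachable over SSH; on other distributions the package manager commands differ:

# update packages and install Docker from the Amazon Linux extras repository
sudo yum update -y
sudo amazon-linux-extras install docker -y

# start the Docker daemon and let the default ec2-user run docker without sudo
sudo service docker start
sudo usermod -a -G docker ec2-user

# log out and back in, then verify the installation
docker info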


Monday, June 1, 2020

AWS Disaster Recovery

Amazon Web Services provides disaster recovery solutions for customers to develop robust, cost-effective, targeted and well-tested recovery plans.

Disaster recovery (DR) is about preparing for and recovering from a disaster. Any event that has a negative impact on a company’s business continuity or finances could be termed a disaster. One of the AWS best practices is to always design your systems for failure.

Recovery Time Objective (RTO) – The time it takes after a disruption to restore a business process to its service level, as defined by the operational level agreement (OLA). For example, if the RTO is 1 hour and a disaster occurs at 12:00 p.m. (noon), then the DR process should restore the systems to an acceptable service level within an hour, i.e. by 1:00 p.m.

Recovery Point Objective (RPO) – The acceptable amount of data loss measured in time before the disaster occurs. For example, if a disaster occurs at 12:00 p.m. (noon) and the RPO is one hour, the system should recover all data that was in the system before 11:00 a.m.

The lower the required RTO and RPO values, the higher the cost of running the application.
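
As a simple illustration of one DR building block, EBS snapshots can be copied into a second region so data can be restored there if the primary region is lost. This is a minimal sketch; the snapshot ID and regions are hypothetical placeholders:

# copy an EBS snapshot from us-east-1 into us-west-2 for backup-and-restore DR
aws ec2 copy-snapshot \
    --region us-west-2 \
    --source-region us-east-1 \
    --source-snapshot-id snap-0123456789abcdef0 \
    --description "DR copy of the web server data volume"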

Sunday, May 31, 2020

S3 Service

·       Number of objects you can store in an Amazon S3 bucket is virtually unlimited.
·       Allows you to write, read, and delete objects containing from 1 byte to 5 terabytes of data each.
·       Provides data lifecycle management capabilities, allowing users to define rules to automatically archive Amazon S3 data to Amazon Glacier, or to delete data at end of life.
·       In S3, objects are encrypted using server-side encryption with either Amazon S3-managed keys (SSE-S3) or AWS KMS-managed keys (SSE-KMS).
·       Amazon S3 stores the archived objects in Amazon Glacier. However, these are Amazon S3 objects, and you can access them only by  using the Amazon S3 console or the Amazon S3 API. You cannot access the archived objects through the Amazon Glacier console or the Amazon Glacier API
·       By default, 100 buckets can be created per account (this limit can be raised via a service limit increase).
·       For objects larger than 5 GB (up to 5 TB), you must use the multipart upload API.
·       S3 Cross-Region Replication (CRR) is configured to a source S3 bucket and replicates objects into a destination bucket in another AWS Region.
·       Amazon S3 Same-Region Replication (SRR), replicates objects between buckets in the same region.
·       Within Amazon S3, server-side encryption (SSE) is the simplest data encryption option available.
·       SSE manages the heavy lifting of encryption on the AWS side and comes in three flavors: SSE-S3, SSE-KMS and SSE-C. The SSE-S3 option lets AWS manage the key for you, which requires that you trust AWS with that information.
·       While Amazon S3 is ideal for hosting static websites, dynamic websites requiring server side interaction, scripting or database interaction cannot be hosted and should rather be hosted on Amazon EC2.
·       S3 also regularly verifies the integrity of data stored using checksums. If Amazon S3 detects data corruption, it is repaired using redundant data.
·       In addition, S3 calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data.
·       Data protection against accidental overwrites and deletions can be added by enabling Versioning to preserve, retrieve and restore every version of the object stored.
·       S3 also provides the ability to protect data in-transit (as it travels to and from S3) and at rest.
·       S3 object lifecycle management allows two types of actions (see the CLI sketch after this list): Transition, in which the storage class of the objects changes, and Expiration, in which the objects are permanently deleted.
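
A minimal sketch of enabling versioning and a lifecycle rule with the AWS CLI (the bucket name and prefix below are hypothetical placeholders):

# protect against accidental overwrites and deletes by keeping every object version
aws s3api put-bucket-versioning \
    --bucket my-example-bucket \
    --versioning-configuration Status=Enabled

# transition objects under logs/ to Glacier after 30 days and expire them after 365 days
aws s3api put-bucket-lifecycle-configuration \
    --bucket my-example-bucket \
    --lifecycle-configuration '{
      "Rules": [{
        "ID": "archive-then-expire",
        "Status": "Enabled",
        "Filter": { "Prefix": "logs/" },
        "Transitions": [{ "Days": 30, "StorageClass": "GLACIER" }],
        "Expiration": { "Days": 365 }
      }]
    }'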

Friday, May 29, 2020

Edge Computing

Edge computing is a networking philosophy focused on bringing computing as close to the source of data as possible in order to reduce latency and bandwidth use.

 In simpler terms, edge computing means running fewer processes in the cloud and moving those processes to local places, such as on a user’s computer, an IoT device, or an edge server. Bringing computation to the network’s edge minimizes the amount of long-distance communication that has to happen between a client and server.


 

 

The increase of IoT devices at the edge of the network is producing a massive amount of data to be computed at data centers, pushing network bandwidth requirements to the limit. Despite improvements in network technology, data centers cannot guarantee acceptable transfer rates and response times, which can be a critical requirement for many applications. Furthermore, devices at the edge constantly consume data coming from the cloud, forcing companies to build content delivery networks to decentralize data and service provisioning, leveraging physical proximity to the end user.

 In a similar way, the aim of Edge Computing is to move the computation away from data centers towards the edge of the network, exploiting smart objects, mobile phones or network gateways to perform tasks and provide services on behalf of the cloud. 

By moving services to the edge, it is possible to provide content caching, service delivery, storage and IoT management, resulting in better response times and transfer rates.

