
Saturday, June 4, 2022

Dependency Tree in Maven

Dependency management is a core feature of Maven. Managing dependencies for a single project is easy.

Managing dependencies for multi-module projects and applications that consist of hundreds of modules is possible.

Maven avoids the need to discover and specify the libraries that your own dependencies require by including transitive dependencies automatically.

This feature is facilitated by reading the project files of your dependencies from the remote repositories specified.

In general, all dependencies of those projects are used in your project, as are any that the project inherits from its parents, or from its dependencies, and so on.

A project's dependency tree can be filtered to locate specific dependencies.

Maven ships with the Dependency plugin by default. With this plugin, we can gain a better understanding of, and control over, the list of dependencies used in a specific project. The plugin comes with several goals.

The command to use is: mvn dependency:tree

The main purpose of the dependency:tree goal is to display all the dependencies of a given project in the form of a tree view.
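For example, a few common invocations are sketched below; the flags shown are standard maven-dependency-plugin parameters, and the groupId used for filtering is only a placeholder:

# show the full dependency tree
mvn dependency:tree
# limit the tree to artifacts from one group (placeholder groupId)
mvn dependency:tree -Dincludes=org.apache.logging.log4j
# also show dependencies omitted as duplicates or conflicts
mvn dependency:tree -Dverbose
# write the tree to a file instead of the console
mvn dependency:tree -DoutputFile=tree.txt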


Wednesday, May 26, 2021

Confluent Hub Client

Confluent Hub is an online repository for extensions and components for Kafka. Kafka is based on an extensible model for many of its services.

It allows plug-ins and extensions, which makes it generic enough to be suitable for many real-world streaming applications. These include both Confluent and third-party components.

Generally, you would go there to search for components including:

  • Connectors
  • SMT (Single Message Transforms)
  • Converters
Installing Connectors from Confluent Hub:

The enterprise version of Confluent provides a script for installing Connectors and other components from Confluent Hub (the script is not included in the Open Source version).

If we're using the enterprise version, we can install a connector using the following command:

$CONFLUENT_HOME/bin/confluent-hub install confluentinc/kafka-connect-mqtt:1.0.0-preview
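If an unattended install is needed (for example from a provisioning script), the client also supports a --no-prompt flag for non-interactive installs; the connector coordinates below simply reuse the example above:

$CONFLUENT_HOME/bin/confluent-hub install --no-prompt confluentinc/kafka-connect-mqtt:1.0.0-preview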

More connectors can be found on Confluent Hub (https://www.confluent.io/hub/).

Tuesday, May 25, 2021

Access Bitbucket using python

Bitbucket is a Git-based source code repository hosting service owned by Atlassian.

Bitbucket Server is a combination Git server and web interface product written in Java and built with Apache Maven.

It allows users to do basic Git operations while controlling read and write access to the code. It also provides integration with other Atlassian tools.

Nowadays, the demand for Python for reporting and ETL operations is increasing.

Using the Python requests library, we can access Bitbucket.

Below is a sample code snippet.

import requests
import json
import pandas as pd
import io

# Bitbucket Cloud REST API endpoint for a workspace's repositories
url = 'https://api.bitbucket.org/2.0/repositories/Abcd'
headers = {'Content-Type': 'application/json'}
USERNAME = 'xxxxxx'
PASSWORD = 'yyyyyy'

response = requests.get(url, auth=(USERNAME, PASSWORD), headers=headers)
if response.status_code != 200:
    print('Status:', response.status_code, 'Headers:', response.headers,
          'Error Response:', response.json())
    exit()

# Flatten the 'values' array of the response into a table and save it to Excel
df = pd.read_json(io.StringIO(response.text))
dk = pd.json_normalize(df['values'])
dk.to_excel('Bitbucket_INC_Report.xlsx', sheet_name='SLA_Report', index=False)

Sunday, May 16, 2021

Install and Use Gremlin in a Docker Container

Gremlin is a simple, safe and secure way to use Chaos Engineering to improve system resilience. You can use Gremlin with Docker in a variety of ways.

It is possible to attack Docker containers and it is also possible to run Gremlin in a container to create attacks against the host or other containers.

• Create a Gremlin account: https://www.gremlin.com/demo/

• Log in to the Gremlin App using your company name and sign-on credentials.

• Identify your "Team ID" and "Secret Key" by navigating to Settings >> Team Settings >> Configuration.

• Issue the command below in Docker to pull the official Gremlin Docker image and run the Gremlin daemon.

docker run -d --net=host \
    --cap-add=NET_ADMIN --cap-add=SYS_BOOT --cap-add=SYS_TIME \
    --cap-add=KILL \
    -v $PWD/var/lib/gremlin:/var/lib/gremlin \
    -v $PWD/var/log/gremlin:/var/log/gremlin \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -e GREMLIN_TEAM_ID="$GREMLIN_TEAM_ID" \
    -e GREMLIN_TEAM_SECRET="$GREMLIN_TEAM_SECRET" \
    gremlin/gremlin daemon

• Use docker ps to see all running Docker containers:

      sudo docker ps

• Jump into your Gremlin container with an interactive shell:

      sudo docker exec -it <gremlin container_id> /bin/bash

• From within the container, check out the available attack types:

      gremlin help attack-container

Saturday, May 15, 2021

Microsoft Restricted Free Usage of Azure DevOps Pipeline

Microsoft has changed the policy of granting the free tier of hosted agent pools for public and private projects in newly created DevOps organizations, citing the reason that many were abusing this feature by sending huge amounts of traffic to these hosted agent pools.

As a result, many users are getting the following error during the build pipeline.

No hosted parallelism has been purchased or granted. To request a free parallelism grant, please reach out to azpipelines-freetier@microsoft.com with your name and organization name.



Microsoft has come up with a temporary alternative approach, until automation is implemented, to grant permission to users who require the free hosted agent pools.

For now, users need to send an email to get free tier access, based on the project visibility type, as explained below.

Private Project

To get free tier access to Microsoft-hosted agent pools for private projects, send an email to azpipelines-freetier@microsoft.com with your name and Azure DevOps organization name.

Public Project

To get free tier access to Microsoft-hosted agent pools for public projects, send an email to azpipelines-ossgrant@microsoft.com with the following details:
  • Your name
  • Azure DevOps organization name (dev.azure.com/yourorganization)
  • Link of the repositories you want to build
  • Short description of your project.

Thursday, May 13, 2021

What is Azure Devops?

Azure DevOps, formerly known as Microsoft Visual Studio Team Services (VSTS), is a set of collaborative development tools built for the cloud.

VSTS was commonly used as a standalone term, whereas Azure DevOps is a platform made up of a few different products, such as:

  • Azure Test Plans
  • Azure Boards
  • Azure Repos
  • Azure Pipeline
  • Azure Artifacts

Azure Pipelines is the CI component of Azure DevOps.

An Azure pipeline can connect to any number of source code repositories, such as Azure Repos and GitHub, to grab code and artifacts for application delivery.

Azure Boards:

Azure Boards is basically used for planning and tracking of work items, backlogs, and sprints. Azure Boards is also called an ALM (application lifecycle management) tool, just like other ALM tools such as JIRA, SpiraTeam, etc. Inside Azure Boards, we can create workflows, issue types, epics, and many other components of ALM.

Azure Repos:
Azure Repos is a place where we can create, manage, and store versions of our codebase. We can use it for code review and other version control functionality.

Azure Repos provides two types of version control systems: Git (distributed version control) and TFVC (centralized version control).

Azure pipeline:

Azure Pipelines is the automation component of DevOps, which allows us to build, release, test, and deploy to the target machine. In simple words, an Azure pipeline is the combination of CI and CD.
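As a minimal sketch, assuming a YAML-based pipeline checked into the repository root (azure-pipelines.yml is the default file name the service looks for; the steps are placeholders), a starter pipeline definition can be created from the command line like this:

cat > azure-pipelines.yml <<'EOF'
# run the pipeline whenever the main branch changes
trigger:
  - main

# use a Microsoft-hosted Ubuntu agent
pool:
  vmImage: 'ubuntu-latest'

steps:
  # placeholder build step; replace with your real build commands
  - script: echo "Building the application..."
    displayName: 'Build'
EOF
git add azure-pipelines.yml
git commit -m "Add starter Azure pipeline"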

Azure Artifacts:

Azure Artifacts is one of the extensions of Azure DevOps which helps us create, host, manage, and share packages across the team. Azure Artifacts supports multiple types of packages, e.g. npm, NuGet, Maven, Python, etc. Artifacts are basically a collection/output of dll, rpm, jar, and many other types of files. One of these files may contain metadata.

Azure test Plans:

Azure Test Plans supports advanced test management solutions for customer feedback, UAT, automation testing, and manual testing.

What can Azure DevOps do?

  • Planning
    As discussed above, Azure Boards is an ALM tool, so it helps us maintain an Agile process. This means Azure DevOps helps us plan, track development history for individual developers, and detect issues using Kanban and Scrum. 

  • Develop
    Multiple types of version control are already integrated with Azure DevOps to manage our workspace and code repository. Almost all DevOps-related features are already built in.

  • Integration/Build
    We can use continuous integration and continuous deployment (CI/CD) with the help of Azure Pipelines. Multiple types of job agents and templates are already available. 

  • Package
    As we know, an inbuilt extension, Azure Artifacts, is already available, so we don't need to go anywhere else for any type of package.

  • Testing
    As we know Testing is an important part of the DevOps process. So Azure DevOps provides multiple types of inbuilt testing templates and management tools.

  • Release
    As we know, the software release process is a very critical moment in DevOps, so Azure DevOps already has multiple inbuilt features to make the process easy and risk-free. 

  • Configure/Operate
    As per my experience, it's very easy to configure and operate, because Azure DevOps has a simple process to configure organization, kanban, sprints, etc..

  • Monitor
    Each phase of the DevOps life cycle should be monitored, because monitoring helps to ensure the health, performance, and reliability of your application.

  • Reporting
    Azure DevOps has a dashboard where we can see reports, a summary and can generate an analytical report with BI integration. 

  • Wiki
    Azure DevOps Wiki helps us distribute information, share knowledge, and collaborate across teams and stakeholders. A wiki can be used to explain a project's user stories, sprints, release notes, etc.

Sunday, May 9, 2021

Dockerfile

A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.

The Docker build command executes the Dockerfile and builds a Docker image from it.

 A Docker image typically consists of:

  • A base Docker image on top of which to build your own Docker image.
  • A set of tools and applications to be installed in the Docker image.
  • A set of files to be copied into the Docker image (e.g. configuration files).
  • Possibly one or more network (TCP/UDP) ports to be opened for traffic in the firewall, etc.

Dockerfile Structure

A Dockerfile consists of a set of instructions. Each instruction consists of a command followed by arguments to that command, similar to command line executables. 

Here is a simple example of a Dockerfile:

# The base image
FROM ubuntu:latest

# More instructions here that install software and copy files into the image.
COPY /myapp/target/myapp.jar /myapp/sriniapp.jar

# The command executed when running a Docker container based on this image.
CMD echo Starting Docker Container
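To build an image from this Dockerfile and run a container from it (the tag name is just an example):

docker build -t myapp:latest .
docker run --rm myapp:latest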

Refer: https://www.youtube.com/watch?v=QLENvSMSD-0

Saturday, May 1, 2021

JIRA Rest API using Python

The Jira REST API enables you to interact with Jira programmatically. Use this API 
to build apps, script interactions with Jira, or develop any other type of 
integration.
The URIs for resources have the following structure:
https://<your-domain>.atlassian.net/rest/api/3/<resource-name>

There are several libraries, such as jira and requests, that can be used to access the Jira services.
  

In the code snippet below, I used the requests library, and to save the results I used the pandas library. pandas is a good library for saving results to an Excel workbook.

pandas can be installed using pip/pip3/easy_install. Make sure you have the right SSL certificates before installing pandas on an intranet.

pip install pandas  
  
Sample Code Snippet:
  
import requests
from requests.auth import HTTPBasicAuth
import json
import pandas as pd

pd.set_option('display.max_columns', 500)
# url = "https://ABCD.atlassian.net/rest/api/3/issuetype"
url = "https://ABCD.atlassian.net/rest/api/3/search"

auth = HTTPBasicAuth("xxxxxxx", "yyyyyyyyyyyyyy")

headers = {
    "Accept": "application/json"
}
query = {
    'jql': 'project = ABCD'
}

response = requests.request(
    "GET",
    url,
    headers=headers,
    params=query,
    auth=auth
)
# Optional: pretty-print the raw response while debugging
# print(json.dumps(response.json(), sort_keys=True, indent=4, separators=(",", ": ")))
if response.status_code != 200:
    print('Status:', response.status_code, 'Headers:', response.headers,
          'Error Response:', response.json())
    exit()
else:
    print('Server Connected. Please wait.....')

# Parse the issue data returned by the search endpoint
df = response.json()
maxresults = df['maxResults']
print(df['total'])
# Flatten the list of issues into a table and save it to Excel
dk = pd.json_normalize(df['issues'])
writer = pd.ExcelWriter('Jira_issues.xlsx', engine='xlsxwriter')
dk.to_excel(writer, sheet_name='Jira_Issues', index=False)
writer.close()

Wednesday, December 23, 2020

Docker for IBM Z

 Docker Enterprise Edition for Linux 17.06 on IBM Z is available directly from IBM and their network of channel partners. 

Companies who already have a mainframe footprint can extend their existing partnership with IBM to add Docker EE to their technology stack for their critical applications.

Additionally, IBM recently announced a new release of the enterprise Linux platform designed specifically for mainframe systems. 

This next generation of IBM LinuxONE systems and Docker EE together provide complementary security capabilities from the system to the application and the ability for massive scale – up to two million containers per system.



Wednesday, October 28, 2020

Docker Content trust

When transferring data among networked systems, trust is a central concern.

In particular, when communicating over an untrusted medium such as the internet, it is critical to ensure the integrity and the publisher of all the data a system operates on.

Docker Content Trust (DCT) provides strong cryptographic guarantees over what code and what versions of software are being run in your infrastructure.

When a publisher using Docker Content Trust pushes an image to a remote registry, Docker Engine signs the image locally with the publisher’s private key.

When the user later pulls this image, Docker Engine uses the publisher’s public key to verify that the image is exactly what the publisher created, has not been tampered with, and is up to date.

There are four major steps to set up DCT:

1) Generate a Docker Content Trust key
2) Add the signer to the Docker repository
3) Sign the image
4) Enable content trust on the Docker host

By following the above four steps, we can set up DCT.
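A hedged sketch of those four steps using the docker trust subcommands (the signer name, key file, and repository below are placeholders):

# 1) Generate a Docker Content Trust key pair for the signer
docker trust key generate alice
# 2) Add the signer and their public key to the repository
docker trust signer add --key alice.pub alice registry.example.com/myapp
# 3) Sign (and push) a specific image tag
docker trust sign registry.example.com/myapp:1.0
# 4) Enforce content trust on the Docker host
export DOCKER_CONTENT_TRUST=1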



Thursday, October 22, 2020

Failure Injection Testing

Fault injection is a technique for enhancing testing quality by introducing intentional faults into the software.

Fault injection is often used in stress testing, and it is considered an important part of developing robust software.

Fault injection Methods:

Compile-Time Injections - a fault injection technique where source code is modified to inject simulated faults into a system.

Run-Time Injections - makes use of a software trigger to inject a fault into a software system during run time. The trigger can be of two types: time-based triggers and interrupt-based triggers.

Tools used for Software Fault Injection:

Following are the tools used for fault injection purposes:

  • beSTORM (Beyond Security) ==> http://www.beyondsecurity.com/
  • The Mu Service Analyzer (Mu Dynamics) ==> www.mudynamics.com
  • Holodeck (Security Innovation) ==> www.securityinnovation.com
  • Xception (Critical Software) ==> http://www.criticalsoftware.co

Friday, October 9, 2020

Monitoring Jenkins with Java Melody

Jenkins is a self-contained, open source automation server which can be used to automate all sorts of tasks related to building, testing, and delivering or deploying software.

Jenkins is written in Java (making it an ideal candidate to be monitored by Instana) and can be extended via plugins. 

There are thousands of plugins, created and maintained by the community, that provide a range of functionality from unit testing, to security, to compliance reporting.

There is a monitoring plugin for Jenkins that provides a lot of data about what’s going on within Jenkins and about the tasks being performed by Jenkins. The Jenkins monitoring plugin relies on JavaMelody which is a basic metric charting tool. 

JavaMelody is an open source application used to monitor Java or Java EE application servers.

It measures and calculates statistical information based on application usage. The resulting data can be viewed in a variety of formats including evolution charts, which track various operations and server attributes over time. 

There are also robust reporting options that allow data to be exported in either HTML or PDF format.
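One way to install the plugin from the command line is via the Jenkins CLI; the plugin short name "monitoring" (the ID used by the JavaMelody-based plugin) and the URL/credentials below are assumptions to adapt to your instance:

# fetch the CLI jar from your Jenkins instance
curl -O http://localhost:8080/jnlpJars/jenkins-cli.jar
# install the JavaMelody-based "monitoring" plugin and restart Jenkins
java -jar jenkins-cli.jar -s http://localhost:8080/ -auth admin:apitoken install-plugin monitoring -restart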

For Demo, Please refer below video:




Thursday, September 24, 2020

Best Practices of QA in a DevOps

  • QA and testing teams should be part of technical teams. They have to move beyond manual functional testing and should focus their efforts on automation and testing strategies. QA becomes the enabler of quality across SDLC. 
  • Quality needs to be well-defined in order to meet requirements. Rather than trying to achieve perfection in software, QA should move towards focusing on accomplishing satisfying user experience within the given time frame. 
  • Metrics that measure quality have to be laid down and measured. Such metrics should detect software defects early in the development cycle. 
  • Goals of individuals and teams need to be optimized. Organizations need to strengthen the necessary behavior and cultural shift by incentivizing quality assurance. 
  • Requirements need to be specific. QA teams should be proactively involved in the requirements process to help guide development teams in the proper direction. 
  • QA & testing should focus more on automated regression testing on critical areas such as key software functionality. 
  • QA & testing has to move towards leveraging automation tools to automate testing wherever possible. 
  • Development, operations, and QA teams should be facilitated and encouraged to communicate, collaborate, and optimize their efforts. 
  • Continuous integration is the key to identifying defects early in the development lifecycle. For continuous integration to be implemented effectively, all the stakeholders have to integrate their work often on a day-to-day basis. 
  • A fully automated continuous testing process needs to be integrated into the SDLC for a successful continuous delivery process that minimizes risks, reduces costs, and accelerates time to market by frequent releases. 

Thursday, September 3, 2020

CodeSmells-SonarQube

CodeSmells are structures in code that violate design principles and negatively impact quality.

 Here are some of the bad smells in Java code. 

Constant Interface: Constant interfaces have only static final data members declared in them, without any methods. The suggested refactoring for this smell depends on the kind of constants present in the constant interface: the constants can be added as members in a class or can be rewritten as enums.

The interface java.io.ObjectStreamConstants is an example of this smell. 

Global Variable Class: The class has one public static (non-final) field. Since it is freely available for modification by anyone, it becomes the equivalent of a C-like global variable, with the only difference being that the variable name is prefixed by the class name, thus avoiding name clashes. Here is an example:

class Balls {
    public static long balls = 0;
}

Global Function Class: It is a public class that has only one static public method and no other fields or methods; it can have an optional private constructor to disallow instantiation. It is equivalent to a C-like global function, except that the function needs to be prefixed by the class name to access it. 

Publicly Exposed Fields: It is a public class with public non-final, non-static data members and no methods (with an optional constructor). It is difficult to maintain such public, C-like classes, as Effective Java notes: "Several classes in the Java platform libraries violate the advice that public classes should not expose fields directly."

Forgotten Interface: A class implements all the methods with the same signatures as the methods listed in an interface. It is a likely mistake that the class intended to implement the interface but forgot to list it as a base type. A consequence of this smell is that objects of the class cannot be treated as a subtype of the interface, and hence the benefits of subtyping and runtime polymorphism are not exploited. 

Clone Class: A clone class is an exact replica of another class (unrelated by inheritance). Essentially, only the name of the class is different, but all its members, their signatures, accessibility, etc. are the same.

Saturday, July 25, 2020

Kong Api Gateway

Kong Gateway is an open-source, lightweight API gateway optimized for microservices, delivering unparalleled latency performance and scalability.

Kong can be deployed, in a variety of configurations, as an edge API gateway, as an internal API proxy, or even as a sidecar in a service mesh configuration.



Key Concepts in Kong:

  • API Object – wraps properties of any HTTP(S) endpoint that accomplishes a specific task or delivers some service. Configurations include HTTP methods, endpoint URIs, the upstream URL which points to our API servers and will be used for proxying requests, maximum retries, rate limits, timeouts, etc.
  • Consumer Object – wraps properties of anyone using our API endpoints. It will be used for tracking, access control and more
  • Upstream Object – describes how incoming requests will be proxied or load balanced, represented by a virtual hostname
  • Target Object – represents where the services are implemented and served, identified by a hostname (or an IP address) and a port. Note that targets of every upstream can only be added or disabled. A history of target changes is maintained by the upstream
  • Plugin Object – pluggable features to enrich functionalities of our application during the request and response lifecycle. For example, API authentication and rate limiting features can be added by enabling relevant plugins. Kong provides very powerful plugins in its plugins gallery
  • Admin API – RESTful API endpoints used to manage Kong configurations, endpoints, consumers, plugins, and so on
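As a rough sketch against the Admin API of recent Kong versions (which model the legacy API object as Services and Routes; the host, names, and limits below are placeholders, and the Admin API listens on port 8001 by default):

# register a backend service behind the gateway
curl -i -X POST http://localhost:8001/services \
  --data name=orders-service \
  --data url=http://orders.internal:8080

# expose it on a path through the gateway
curl -i -X POST http://localhost:8001/services/orders-service/routes \
  --data "paths[]=/orders"

# enable rate limiting on that service via a plugin
curl -i -X POST http://localhost:8001/services/orders-service/plugins \
  --data name=rate-limiting \
  --data config.minute=100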

There are two Kong modalities:
  • Community Edition: Kong's CE version boasts a comprehensive range of functionalities, including open-source plugin support, load balancing, and service discovery, but does not include a management panel. Therefore, we will need to configure Kong via REST, or by using an open-source dashboard such as Konga or Kong Dashboard.
  • The Enterprise Edition features expanded out-of-the-box functionalities, such as the management dashboard, security plugins, metrics and 24×7 support, to name a few.
Kong download link: https://konghq.com/kong/

Tuesday, July 21, 2020

Akka Toolkit

Akka is a free and open-source toolkit and runtime simplifying the construction of concurrent and distributed applications on the JVM.

Akka supports multiple programming models for concurrency, but it emphasizes actor-based concurrency, with inspiration drawn from Erlang.

Akka handles all of the underlying threading complexities by allowing you to just focus on dividing your tasks into actors, defining messages to pass among actors, and wiring together their communication logic.

The real power of Akka is realized when you distribute actors across multiple machines.

Akka follows the Actor Model.

In the actor model, the actor is represented as an individual entity. 
Some characteristics of the actor are as follows:
  • Actor encloses the state and application logic
  • Actor interacts only using messages.
  • Every actor has an exclusive address and mailbox through which it can receive messages from others.
  • Mailbox messages are processed by the actor in consecutive order.
  • Tree hierarchy is used to represent the actor system.
Benefits of using the Actor Model:
The following characteristics of Akka allow you to solve difficult concurrency and scalability challenges in an intuitive way:
  • Event-driven model — Actors perform work in response to messages. Communication between Actors is asynchronous, allowing Actors to send messages and continue their own work without blocking to wait for a reply.
  • Strong isolation principles — Unlike regular objects in Java, an Actor does not have a public API in terms of methods that you can invoke. Instead, its public API is defined through messages that the actor handles. This prevents any sharing of state between Actors; the only way to observe another actor’s state is by sending it a message asking for it.
  • Location transparency — The system constructs Actors from a factory and returns references to the instances. Because location doesn’t matter, Actor instances can start, stop, move, and restart to scale up and down as well as recover from unexpected failures.
  • Lightweight — Each instance consumes only a few hundred bytes, which realistically allows millions of concurrent Actors to exist in a single application.
Akka Actors:
  • Ask Pattern
  • Ask Pattern mapTo
  • Ask Pattern pipeTo
  • Actor Hierarchy
  • Actor Lookup
  • Child actors
  • Actor Lifecycle
  • Actor PoisonPill
  • Error Kernel Supervision
Akka Routers:
  • RoundRobinPool
  • ScatterGatherFirstCompletedPool
  • TailChoppingPool
  • BroadcastPool
Akka Dispatchers:
  • Akka Default Dispatcher
  • Akka Lookup Dispatcher
  • Fixed Thread Pool Dispatcher
  • Resizable Thread Pool Dispatcher
  • Pinned Thread Pool Dispatcher

Akka HTTP: 

As the name implies, this module is typically best suited for middle-tier applications which require an HTTP endpoint. As an example, you could use Akka HTTP to expose a REST endpoint that interfaces with a storage layer such as a database. For additional information, you can refer to the official Akka documentation on Akka HTTP.


Akka Streams:

This module is useful when you are working on data pipelines or even stream processing. For additional information, you can refer to the official Akka documentation on Akka Streams.


Akka Networking:

This module provides the foundation for having actor systems being able to connect to each other remotely over some predefined network transport such as TCP. For additional information, you can refer to the official Akka documentation on Akka Networking.


Akka Clustering:

This module is an extension of the Akka Networking module. It is useful in scaling distributed applications by having actors form a quorum and work together via some predefined membership protocol. For additional information, you can refer to the official Akka documentation on Akka Clustering.


For Creating a Simple Application,use the below link

https://developer.lightbend.com/start/?group=akka&project=akka-quickstart-java




 

Thursday, July 9, 2020

Configure New Relic

New Relic is an APM (application performance management) tool; it helps in monitoring the JVM, databases, transactions, external services, etc.

New Relic has an agent, which is a small piece of code that sits inside the web application and watches what the web page code is building while it’s building web pages.

 The agent measures how long the code takes to build the web page and reports it back to the user. It informs the user of the time taken for a page to load and specifies if any factors are delaying the process.

It displays the load time for users all across the globe accessing the web application and it follows it all the way down, right to the code.

So the user will be able to determine if the longer load time is caused by something in your server, code, network, or in the browser, etc.

Command: java -javaagent:newrelic\newrelic.jar -jar yourapplication.jar
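A short sketch of a typical setup (the directory layout follows the standard newrelic-java.zip distribution; the license key is a placeholder you must supply):

# unpack the agent next to the application
unzip newrelic-java.zip -d /opt/myapp
# edit /opt/myapp/newrelic/newrelic.yml and set: license_key: '<YOUR_LICENSE_KEY>'
# start the application with the agent attached
java -javaagent:/opt/myapp/newrelic/newrelic.jar -jar yourapplication.jar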

Key features of New Relic include:

  • Key business transaction monitoring
  • External services performance monitoring
  • Availability and error monitoring
  • End-to-end user monitoring through browser support
  • Synthetic monitoring
  • Alerts and notifications
  • Native integration with third-party tools such as Slack and PagerDuty

A sample demo can be viewed from my youtube channel:



 

 


Tuesday, July 7, 2020

Introduction to Heroku

Heroku is a Platform as a Service (PaaS), delivering tools that enable software development. Heroku, as a PaaS, allows businesses to quickly deploy, build, manage, and scale enterprise-level applications while bypassing the infrastructure headaches normally required to host an enterprise-quality application.
 It acts as the middle-man between hosting/infrastructure and Salesforce.

Advantages
  • Easy setup - as a PaaS you don't need to know how to install and configure Apache, nginx, unicorn, passenger, MySQL, Postgres etc
  • Easier to scale initially - spin up more dynos, size up DBs etc
  • Great plugin support for third party apps
Disadvantages
  • Pricey - after the free tier, the pricing is steep; you will pay 3x - 5x more than a comparably performing setup through an IaaS
  • Performance - as highlighted by Rap Genius, the switch away from being purely a Rails PaaS and the updates to the routing engine have led to worse performance, which translates to more dynos and still higher costs
  • Lock-In - since you aren't managing your infrastructure you can't move it to take advantage of savings
  • Single point of failure - granted it happens rarely but it's there. Architecting around this on your own requires a lot of experience but you'll be glad once you get past the learning curve.
We can perform operations on Heroku by simply creating an account using the link below.
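Once an account and the Heroku CLI are set up, a first deployment typically looks roughly like this (the app name is only a placeholder):

heroku login                  # authenticate the CLI
heroku create my-sample-app   # create the app and add a "heroku" git remote
git push heroku main          # build and release from the main branch
heroku ps:scale web=1         # make sure one web dyno is running
heroku logs --tail            # stream application logs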


Sunday, July 5, 2020

Vault by HashiCorp

Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, or certificates.

Vault can be used when we are working with the following:
1) Database credentials
2) API keys for external services
3) Credentials for service-oriented architecture communication
4) Securing configuration files
...etc.

Key Features of Vault:
  • Secure Secret Storage: Arbitrary key/value secrets can be stored in Vault. Vault encrypts these secrets prior to writing them to persistent storage, so gaining access to the raw storage isn't enough to access your secrets (a short example follows this list).
  • Dynamic Secrets: Vault can generate secrets on-demand for some systems, such as AWS or SQL databases. For example, when an application needs to access an S3 bucket, it asks Vault for credentials, and Vault will generate an AWS keypair with valid permissions on demand. After creating these dynamic secrets, Vault will also automatically revoke them after the lease is up.
  • Data Encryption: Vault can encrypt and decrypt data without storing it. This allows security teams to define encryption parameters and developers to store encrypted data in a location such as SQL without having to design their own encryption methods.
  • Leasing and Renewal: All secrets in Vault have a lease associated with them. At the end of the lease, Vault will automatically revoke that secret. Clients are able to renew leases via built-in renew APIs.
  • Revocation: Vault has built-in support for secret revocation. Vault can revoke not only single secrets, but a tree of secrets, for example all secrets read by a specific user, or all secrets of a particular type. Revocation assists in key rolling as well as locking down systems in the case of an intrusion.
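As a small illustration of the key/value secret storage described above (dev mode, the address, and the secret path are only for local experimentation, not production):

# start a throwaway dev-mode server (in-memory and automatically unsealed)
vault server -dev &
export VAULT_ADDR='http://127.0.0.1:8200'
# store and read back an arbitrary key/value secret
vault kv put secret/myapp db_password='s3cr3t'
vault kv get secret/myapp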
Vault Installation:
Vault can be installed from the below link.




Wednesday, June 24, 2020

Ansible Tower

Ansible Tower (https://www.ansible.com/products/tower) by Red Hat is an enterprise framework for managing your Ansible automation that provides a UI and a RESTful API.
It provides role-based access control, graphical inventory management, and job scheduling.
It provides real-time output of playbook runs and is fully compatible with the major cloud environments.
Ansible Tower features:
  • Role-based access control: you can set up teams and users in various roles. These can integrate with your existing LDAP or AD environment.
  • Job scheduling: schedule your jobs and set repetition options
  • Portal mode: this is a simplified view of automation jobs for newbies and less experienced Ansible users. This is an excellent feature as it truly lowers the entry barriers to starting to use Ansible.
  • Fully documented REST API: allows you to integrate Ansible into your existing toolset and environment (see the sketch after this list)
  • Tower Dashboard: use this to quickly view a summary of your entire environment. Simplifies things for sysadmins while sipping their coffee.
  • Cloud integration: Tower is compatible with the major cloud environments: Amazon EC2, Rackspace, Azure.
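For instance, a job template could be launched through that API roughly as follows (the host, credentials, and template ID are placeholders):

# list the available job templates
curl -k -u admin:password https://tower.example.com/api/v2/job_templates/
# launch job template 42
curl -k -u admin:password -X POST https://tower.example.com/api/v2/job_templates/42/launch/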
Ansible Tower is supported by the following operating systems:
  • Red Hat Enterprise Linux 6 64-bit
  • Red Hat Enterprise Linux 7 64-bit
  • CentOS 6 64-bit
  • CentOS 7 64-bit
  • Ubuntu 12.04 LTS 64-bit
  • Ubuntu 14.04 LTS 64-bit
  • Ubuntu 16.04 LTS 64-bit
Hardware requirements:
  • 64-bit support (kernel and runtime) and 20 GB of hard disk space.
  • Minimum 2 GB RAM (4+ GB RAM recommended).
  • 2 GB RAM is the minimum (and is recommended for Vagrant trial installations).
  • 4 GB RAM is recommended per 100 forks.

