Encryption – What is Cloud KMS? What is its purpose and what are the advantages of KMS? How does it work? How can I use it? (AWS KMS, GCP KMS, Azure Key Vault)

What is the purpose / benefit of KMS?

  1. A KMS prevents the leakage of decryption keys, much like an HSM, but HSMs are expensive and difficult to use, whereas KMSs are inexpensive and easy to use because they expose API endpoints.
  2. A KMS turns access control for encrypted data from a decryption-key management problem (where granular access is effectively impossible and access cannot be revoked) into an identity and access management problem (where ACLs make it easy to manage access, grant granular access, and revoke it).
  3. Increased auditability and control of access to encrypted data.

Give me a concrete example of a problem that KMS solves, and of the advantage of using it:

KMS lets you store encrypted secrets securely in git without risking leakage of decryption keys. You can control access to the encrypted secrets at a granular level and revoke access without having to modify the encrypted files.
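A minimal sketch of that workflow with the AWS CLI, assuming a hypothetical key alias and file names; only the ciphertext ever reaches the repository:

aws kms encrypt \
  --key-id alias/app-secrets \
  --plaintext fileb://secrets.yaml \
  --query CiphertextBlob --output text > secrets.yaml.enc
git add secrets.yaml.enc

# Anyone whose IAM identity is allowed kms:Decrypt on the key can recover the plaintext.
base64 -d secrets.yaml.enc > secrets.yaml.bin
aws kms decrypt \
  --ciphertext-blob fileb://secrets.yaml.bin \
  --query Plaintext --output text | base64 -d > secrets.yaml

Revoking someone's access is an IAM or key-policy change; the encrypted files in git never have to change.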

What is Cloud KMS? How does it work?

KMS is an encryption technique that corrects the shortcomings of symmetric, asymmetric and HSM-based encryption. It is the basis of emerging techniques such as cryptographic anchoring.

A brief evolution of cryptography

  1. Symmetric encryption keys:
    • A long password (the key) is used for both encryption and decryption.
  2. Asymmetric encryption public-private key pairs:
    • The public key encrypts data; the private key decrypts data that was encrypted with the public key.
  3. HSMs (hardware security modules):
    • Ensure the private key cannot be disclosed.
    • HSMs are expensive.
    • HSMs are not friendly to users or automation.
  4. Cloud KMS (Key Management Services):
    • KMS is a trusted service that encrypts and decrypts data on behalf of clients. It essentially allows a user or machine to encrypt and decrypt data using their identity rather than encryption/decryption keys. (A client authenticates with the KMS, which checks its identity against an ACL. If it has decryption rights, the client can send encrypted data in a request to the KMS, which decrypts it on the client's behalf and returns the plaintext to the client over a secure TLS tunnel.)
    • KMSs are cheap.
    • KMSs are exposed via REST APIs, which makes them easy to use and automate.
    • KMSs are extremely secure; they have gone over a decade without a decryption key being leaked.
    • The invention of the KMS encryption technique introduced three killer features:
      1. When responding to a known breach:
        Before KMS, when decryption keys are leaked: you cannot revoke a decryption key, which means you need to rotate multiple decryption keys, re-encrypt all data with the new keys, and do your best to purge the old encrypted data. While doing all of this, you will have to fight with management for permission to take multiple production systems down, work to minimize that downtime, and even if everything is done well, you may never be able to completely purge the old encrypted data, as in the case of git history and backups.
        After KMS, when credentials are leaked: the leaked credentials can be revoked, which makes them useless. The nightmare of re-encrypting data and purging old encrypted data disappears. You still have to rotate secrets (credentials, as opposed to decryption keys), but rotation becomes cheap enough to be automated and scheduled as a preventive measure.
      2. Managing access to encrypted data goes from an impossible task involving distributed decryption keys to the trivial task of managing a centralized access control list. It becomes easy to revoke, edit and grant granular access to encrypted data; and, as a bonus, since cloud KMS, IAM and SSO federation integrate, you can leverage pre-existing user identities.
      3. Cryptographic anchoring techniques become possible:
        • Network access control lists can be applied to the KMS so that data can only be decrypted from inside your environment (a sketch follows this list).
        • The KMS decryption rate can be monitored to establish a baseline; when an abnormal rate occurs, alerts and rate limiting can be triggered.
    • KMS decryption keys can be secured by an HSM.
    • The chances of the decryption keys leaking are practically nil because clients never interact directly with them.
    • Cloud providers can afford to hire the best security professionals and implement the costly business processes needed to keep the key management systems as secure as possible, so the chances of the master keys leaking are also close to zero.
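To make the cryptographic anchoring idea concrete, here is a rough sketch of a KMS key policy that denies decryption unless the call arrives through a specific VPC endpoint. The key ID, account ID and endpoint ID below are placeholders, and a real policy would also need Allow statements for every legitimate user of the key:

aws kms put-key-policy \
  --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
  --policy-name default \
  --policy '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "AllowKeyAdministration",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "kms:*",
        "Resource": "*"
      },
      {
        "Sid": "DenyDecryptOutsideOurVpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "kms:Decrypt",
        "Resource": "*",
        "Condition": {
          "StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}
        }
      }
    ]
  }'

The decryption-rate baseline mentioned above would typically be built by monitoring the Decrypt calls that KMS records in CloudTrail and alerting on anomalies.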

How to use KMS?

  • Mozilla SOPS is a tool that wraps and abstracts KMS; it is ideal for securely storing encrypted secrets in git.
  • The Helm Secrets plugin wraps Mozilla SOPS so that you can securely store encrypted Kubernetes YAML files in git; when you apply them, the secret values are decrypted transparently at the last moment, just before they pass through an encrypted TLS tunnel directly to the kube-apiserver. Kubernetes can then use KMS to re-encrypt the Kubernetes secrets so that they are stored encrypted in the etcd database. (A sketch follows this list.)
  • You can use it with any tool, in a cloud-agnostic way.
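A rough sketch of what that looks like in practice; the key ARN, file names and release name are placeholders:

# Encrypt: sops requests a data key from KMS and encrypts every value in the file.
sops --encrypt \
  --kms arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab \
  secrets.yaml > secrets.enc.yaml
git add secrets.enc.yaml

# Decrypt: works for anyone whose IAM identity has kms:Decrypt on that key.
sops --decrypt secrets.enc.yaml

# With the helm-secrets plugin, decryption happens transparently at install time.
helm secrets install my-release ./my-chart -f secrets.enc.yaml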

amazon aws – Restrict Cognito user access to CloudFront files backed by S3

What is the easiest way to limit access to CloudFront-distributed files stored in S3 so that only users authenticated via Cognito can reach them?

Specifically: I have files stored in S3 whose delivery I want to speed up a little, so I use a CloudFront distribution. Some files should only be accessible to users authenticated via Cognito.

The web and the documentation seem full of advice on how to access CloudFront from third-party servers via signed URLs, which seems very heavyweight. However, my case is very simple: I want users to authenticate via Cognito and then limit their access to files hosted on S3. That seems manageable, but as soon as CloudFront enters the picture, things become incredibly difficult. Why?
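For context, the signed-URL mechanism the documentation keeps pointing to boils down to something like the sketch below (the distribution domain, key pair ID and key path are placeholders); the perceived heaviness is that some trusted backend has to hold the private key and mint these URLs after the Cognito authentication succeeds:

aws cloudfront sign \
  --url https://d1234example.cloudfront.net/private/report.pdf \
  --key-pair-id APKAEXAMPLEKEYID \
  --private-key file://cloudfront-private-key.pem \
  --date-less-than 2030-01-01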

linux – Unable to upload a .p12 certificate from a local Mac to a remote Parse Server on AWS EC2 – Bitnami

I know how to upload the .p12 cert for iOS push notifications via Heroku or Back4App.
However, my Parse Server is hosted on an AWS EC2 instance with a Bitnami image.
Therefore, I can only interact with my server from the command line.
I have tried to upload the PFX (.p12) from my local machine via scp, something like this:

scp -i /Path/To/My/Certificates.p12 ubuntu@server_ip:/home

but I receive the following error in the terminal:

Load key "/Path/To/My/Certificates.p12": invalid format
ubuntu@server_ip: Permission denied (publickey).
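For comparison, a typical scp upload of a local file to an EC2 host looks like the sketch below; note that -i must point at the instance's SSH key pair, not at the file being copied (the key path, user and destination are placeholders):

scp -i ~/.ssh/my-ec2-key.pem /Path/To/My/Certificates.p12 ubuntu@server_ip:/home/ubuntu/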

Unfortunately there is little documentation on how to upload .p12 files without a GUI like Back4App / Heroku provide.

I would be so very happy if someone could help me on this.

T.I.A

amazon web services – AWS IoT Mobile App

IoT novice here … I am looking to create a mobile app to communicate with any of my devices with the help of AWS IoT. Before starting to build the application, I thought I would ask the experts for advice. Does AWS IoT have an integrated mobile app that I can reuse, or should I create an app from scratch? I would really appreciate it if anyone could share the whole process (not the AWS IoT communication, but the steps to create a mobile app for AWS IoT).

Amazon Web Services – HTTPS only works with the Load Balancer DNS – AWS

I have a problem with HTTPS configuration on AWS, I hope you can help.

What I already have:

  1. EC2 – with an Elastic IP, ports opened via a security group.
  2. Load Balancer attached to EC2 (with the same security group as EC2).
  3. SSL Certificate of AWS (ACM)
  4. Domain – "forwarded" from another service (not Amazon), using only the Elastic IP in its DNS settings. (Could this be the problem?)
  5. Route 53 – configured for the domain with the AWS (SSL) certificate, and for the IPv4 address I use an alias to the Load Balancer.

How it works:

  • EC2: the Elastic IP and public DNS work (HTTP only), as they should, I suppose.
  • Load Balancer: works and serves both HTTPS and HTTP, but only via its DNS name.
  • Route53 (Domain) – Works only for HTTP. Each HTTPS request returns ERR_CONNECTION_REFUSED.

How I explain this problem to myself (if I am wrong about something, please tell me):
So, since the domain's DNS settings redirect to the Elastic IP attached to EC2, there cannot be an SSL connection, because I use ACM for the SSL certificate and that certificate sits on the Load Balancer; and since the domain's DNS has no link to the Load Balancer, AWS can only provide SSL through the Route 53 configuration?

Would it solve the problem if, in the domain's DNS settings, I replaced the EC2 Elastic IP address with the public DNS name of the Load Balancer?
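If Route 53 is (or becomes) the authoritative DNS for the domain, pointing the record at the load balancer instead of the Elastic IP looks roughly like the sketch below; the hosted zone IDs, domain name and load balancer DNS name are placeholders:

aws route53 change-resource-record-sets \
  --hosted-zone-id Z1D633PJN98FT9 \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z35SXDOTRQ7X7K",
          "DNSName": "my-load-balancer-1234567890.us-east-1.elb.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'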

Need migration and git configuration from C9 to AWS C9

I am a PHP developer working on a Lumen framework project, and I was using the C9 (c9.io) online PHP IDE, which handles server configuration and everything else. Just choose PHP and Bitbucket and it automatically configures the server environment, with git integration to Bitbucket and AWS CodeCommit (on commits, the code is pushed to both).

C9 was bought by Amazon. The AWS (Amazon Web Services) platform is similar, but different. All pre-AWS servers and services will be removed at the end of the month.

They have a migration tool that simply bundles your PHP code and puts it on an EC2 server so you can use AWS C9 on it, but there is no simple Bitbucket integration, nor a simple "Run my code" button where I get a link and can see the changes on the fly; it takes a special configuration to get an external link, which I am not able to do.

I am not a server administrator, and configuring AWS C9 so it works like C9 is a little beyond me. I need someone who understands my needs to move my C9 setup described above to AWS C9 on an EC2 instance (with PHP 7.2, default installation) that, on commit, pushes to both gits, Bitbucket and AWS CodeCommit. I need a public SSL link on which the code runs so it is quick and easy to see the changes.
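For the "push to both gits" part, the usual git configuration is a short sketch like the one below; the remote URLs are placeholders:

git remote add origin git@bitbucket.org:myteam/myproject.git
# Add both push URLs so a single push updates Bitbucket and CodeCommit.
git remote set-url --add --push origin git@bitbucket.org:myteam/myproject.git
git remote set-url --add --push origin https://git-codecommit.us-east-1.amazonaws.com/v1/repos/myproject
git push origin master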

There is no database to migrate or configure, the project connects to Amazon's RDS.

My budget is open to reasonable offers.

Amazon Web Services – S3 file row count does not match after a count in AWS Athena

I have a file stored in S3 with a row count of 33,09,1073. When counting via a query in AWS Athena, the count is 33,196,272 instead, and we discovered that the surplus rows were all empty. We are trying to reconcile the two counts, but the DELETE command does not work in AWS Athena. Any suggestions on how to remove the extra blank rows?
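For what it's worth, Athena cannot DELETE rows from a plain external table over S3; the usual workarounds are to filter the blank rows out in the query, or to write a cleaned copy with CTAS. A rough sketch, where the database, table, column and output location are placeholders:

aws athena start-query-execution \
  --query-string "CREATE TABLE mydb.mytable_clean WITH (format = 'PARQUET') AS SELECT * FROM mydb.mytable WHERE col1 IS NOT NULL AND col1 <> ''" \
  --result-configuration OutputLocation=s3://my-athena-query-results/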

Configuring the AWS RDS Read Replica on Magento

My Magento database hosted on AWS MySQL RDS is running at 100%. I have created a read replica so that read queries can be served by it, but I do not know how to configure it.

Does anyone have any idea of how to configure the AWS read replica for Magento?

magento2 – AWS magento 2 questions

I've created a Magento 2 instance on AWS Lightsail, using the official Magento image on AWS.

1- As soon as I finished creating it, the next screen says "you do not have a database, do you want to create one?". That's impossible for Magento 2, isn't it? Since I used the image, a database should already have been created, shouldn't it?

2- Does the official Magento 2 image come with Varnish and Redis, or do I have to configure them myself?

How to set environment variables for AWS CodeStar

I'm trying to set environment variables on the different Elastic Beanstalk environments, created by CloudFormation, to which my application is deployed (that is, build, stage, production).

How should I do that with template.yml?
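Elastic Beanstalk environment variables live in the aws:elasticbeanstalk:application:environment option namespace; in a CloudFormation/CodeStar template.yml, that same namespace goes under the OptionSettings of the AWS::ElasticBeanstalk::Environment resource. A CLI illustration of the namespace (the environment name and variables are placeholders):

aws elasticbeanstalk update-environment \
  --environment-name my-app-staging \
  --option-settings \
    Namespace=aws:elasticbeanstalk:application:environment,OptionName=API_URL,Value=https://api.example.com \
    Namespace=aws:elasticbeanstalk:application:environment,OptionName=STAGE,Value=staging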