AWS routing between AZs

There are three subnets:

  • subnet A in AZ-A, 10.0.1.0/24
  • subnet B in AZ-B, 10.0.2.0/24
  • subnet C in AZ-C, 10.0.3.0/24

There is one server in subnet A (10.0.1.50) that answers pings sent over a Site-to-Site VPN.

Is it somehow possible to route pings to this one and only server even when pinging 10.0.2.50 or 10.0.3.50?
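For what it's worth, one avenue (a hedged sketch, not a confirmed recipe) is VPC more-specific routing: add /32 routes for the two phantom addresses that point at the server's network interface, disable the ENI's source/destination check, and have the server itself answer for those addresses (for example by adding them to a loopback interface). A minimal boto3 sketch with placeholder IDs; depending on how the VPN traffic enters, the routes may also need to live on a gateway route table associated with the virtual private gateway:

    import boto3

    ec2 = boto3.client("ec2")  # region/credentials assumed to be configured

    ROUTE_TABLE_ID = "rtb-0123456789abcdef0"  # placeholder: route table the VPN traffic uses
    SERVER_ENI_ID = "eni-0123456789abcdef0"   # placeholder: ENI of the server at 10.0.1.50

    # The server will receive traffic for addresses that are not its own,
    # so the ENI must not drop it.
    ec2.modify_network_interface_attribute(
        NetworkInterfaceId=SERVER_ENI_ID,
        SourceDestCheck={"Value": False},
    )

    # Route the two phantom /32s at the server's ENI.
    for phantom in ("10.0.2.50/32", "10.0.3.50/32"):
        ec2.create_route(
            RouteTableId=ROUTE_TABLE_ID,
            DestinationCidrBlock=phantom,
            NetworkInterfaceId=SERVER_ENI_ID,
        )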

ssh – Are AWS EC2 key-pairs compromised when an EC2 instance is compromised?

Suppose I have multiple EC2 instances deployed with the same key-pair. The key-pair is used for SSH access and general troubleshooting. If one instance is compromised, do I need to be concerned about the key-pair allowing access to the other instances?
What kind of cryptographic primitive is used for EC2 key-pairs?
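For context, EC2 key pairs are asymmetric SSH key pairs (RSA or ED25519); AWS places only the public half in ~/.ssh/authorized_keys on the instance, so a compromised instance should expose just the public key, unless the private key was copied onto the box. A small boto3 sketch (the key-pair name is a placeholder) showing the key type and fingerprint AWS has on record:

    import boto3

    ec2 = boto3.client("ec2")

    # "my-shared-keypair" is a placeholder name.
    resp = ec2.describe_key_pairs(KeyNames=["my-shared-keypair"])
    for kp in resp["KeyPairs"]:
        # KeyType is "rsa" or "ed25519"; the fingerprint identifies the public key.
        print(kp["KeyName"], kp.get("KeyType"), kp["KeyFingerprint"])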

apache – WordPress in AWS Lightsail – restrict public IP

I have a WordPress instance in AWS Lightsail with a static public IP. I also created a DNS record pointing my domain at that IP.

How can I hide/restrict the public IP? Right now anyone can browse straight to the IP, but I don’t want the public to be able to do that; the site should only be reachable via the domain. I have a certificate installed for the domain but not for the IP.

user goes to -> https://www.hello.com/ (1.1.1.1) – ALLOW

user goes to -> http://1.1.1.1/ – restrict
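One common pattern (a sketch assuming a Bitnami-style Apache layout; paths and the domain are placeholders) is a catch-all default virtual host that denies requests addressed to the bare IP, so only requests carrying the right Host header reach WordPress:

    # First (default) vhost catches requests made to http://1.1.1.1/
    <VirtualHost *:80>
        ServerName catchall.invalid
        <Location />
            Require all denied
        </Location>
    </VirtualHost>

    # Real site vhost, matched by Host: www.hello.com
    <VirtualHost *:80>
        ServerName www.hello.com
        DocumentRoot "/opt/bitnami/wordpress"
    </VirtualHost>

Apache hands a request to the first virtual host whose ServerName/ServerAlias matches, falling back to the first defined vhost for that address, which is why the deny-all block must come first. For HTTPS, a client connecting to https://1.1.1.1/ will already get a certificate warning, since the certificate only covers the domain.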

Run a PHP file once every day using a cron job on an AWS Lightsail Bitnami WordPress installation

I’m using a custom PHP script to fetch popular posts based on Jetpack post views and write the results to a static PHP file.

Now I want to run this script once every day by setting up a cron job in my Bitnami WordPress installation on AWS Lightsail.

I’m NOT good at working with the command line, so can someone just tell me which commands I have to execute to set up a cron job?

I’ve already tried what is mentioned in the official Bitnami documentation and a few other resources, without any success.

Thanks
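For illustration, a minimal crontab sketch; the PHP binary and script paths below are assumptions based on typical Bitnami layouts and must be adjusted:

    # Open the crontab of the current (bitnami) user for editing
    crontab -e

    # Then add one line: run the script every day at 03:00,
    # appending output and errors to a log file
    0 3 * * * /opt/bitnami/php/bin/php /opt/bitnami/wordpress/fetch-popular.php >> /tmp/fetch-popular.log 2>&1

The five leading fields are minute, hour, day of month, month, and day of week, so `0 3 * * *` means 03:00 every day.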

network – Isolating AWS resources with multiple subnets vs multiple VPCs

I have AWS resources (e.g. EC2s, RDS instances) that I would like to isolate from each other so that if one is compromised, the potential damage is limited. I am most concerned about data leakage / exfiltration. I can group these resources into logical “areas”. Some of the resources need access to the public internet. Some of the resources need API access to other resources in different areas. Occasionally, developers will need to make SSH connections to the resources via OpenVPN, so those keys might also be a security risk.

My understanding is that I can split my resources in a few ways:

  • A single VPC and a single subnet with communication controlled by security groups (I understand this is not recommended, but why?)
  • A single VPC with multiple subnets and controlled communication between them
  • Multiple VPCs each containing multiple subnets, with controlled communication between them

What are the security implications of each approach?
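Whichever layout is chosen, the "controlled communication" part usually comes down to security group references rather than CIDR rules: a rule can admit traffic only from members of another security group, regardless of which subnet they sit in. A hedged boto3 sketch with placeholder group IDs, allowing only an "app" tier to reach a "db" tier on PostgreSQL's port:

    import boto3

    ec2 = boto3.client("ec2")

    # Placeholder IDs: security group of the db tier and of the app tier.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0db0000000000000a",
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 5432,
            "ToPort": 5432,
            # Reference the app tier's security group instead of an IP range.
            "UserIdGroupPairs": [{"GroupId": "sg-0app000000000000b"}],
        }],
    )

Note that security group references only work within a VPC (or across peered VPCs), which is itself one of the trade-offs between the single-VPC and multi-VPC options above.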

Can we run MongoDB on an AWS server?

Please suggest if anyone knows.

Determining when to use Serverless vs Containerized application (AWS Lambda vs ECS) – Is Java Spring dead?

I work for an organization that heavily leverages AWS. There is a strong push that every team move from containers deployed on ECS to AWS Lambda and Step Functions for (almost) every project. I know that there are workflows for which Lambdas are the best solution, for example infrequent, short-duration processes or handling S3 uploads. However, I feel like my project isn’t a great use case for them because:

  1. We have many calls to a database, and I don’t want to have to worry about re-establishing connections because the container a Lambda was running in isn’t available anymore (see the sketch after this list).

  2. We have many independent flows, which would require too many Lambdas to manage efficiently. With each new Lambda you create, you have to maintain an independent deployment pipeline and all the bureaucratic processes and items that go with owning a deployable component. By limiting the number of these, the team can focus on delivering value rather than maintenance.

  3. We run a service that needs to be available 24/7, handling around 10 to 30 transactions per second around the clock. The runtime for each invocation is generally under 10 seconds, with total transactions for a day in the tens of thousands.
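Regarding concern 1, the usual mitigation (sketched below under assumptions: PyMySQL as the driver, placeholder environment variables) is to create the connection outside the handler so it is reused by warm execution environments, optionally with RDS Proxy pooling in front:

    import os
    import pymysql  # assumed driver; any client with a reconnect facility works similarly

    # Created at module load time, outside the handler, so the connection is
    # reused across invocations that land on the same warm execution environment.
    _conn = pymysql.connect(
        host=os.environ["DB_HOST"],  # placeholder configuration
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        database=os.environ["DB_NAME"],
    )

    def handler(event, context):
        # Re-establish the connection if the warm environment kept it open
        # past the database server's idle timeout.
        _conn.ping(reconnect=True)
        with _conn.cursor() as cur:
            cur.execute("SELECT 1")
            return {"ok": cur.fetchone()[0] == 1}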

Also, more generally, I’m not bought into the serverless ecosystem because of a few pain points:

  1. Local development. I know the tooling for developing AWS Lambdas on a developer machine has gotten much better, but having to start all these different Lambdas locally, plus a step function, just to test an application end to end seems like a huge hassle. I think it makes much more sense to have a single Java Spring Boot application that you can test end to end, and debug if necessary, at the click of a button.

  2. Reduced isolation. If you have two ECS clusters and one is experiencing a huge throughput spike, the other ECS cluster will not be impacted, because they are independent. Not so for Lambda. We’ve seen that if other Lambdas are using all the excess provisioned concurrency and we have to go over our reserved concurrency limit, then we are out of luck and get rate-limited heavily, leading to errors. I know this should be a niche scenario, but why risk it at all? I think the fact that Lambdas are not independent is one of the things I like least about this ecosystem.

Am I thinking about Lambdas/serverless wrong? I am surrounded by developers who think that Java and Spring are dead and that virtually every project must be built as a Go/Python Lambda going forward.

@Mods if there are any ways that I can make this question more appropriate for the software engineering stack exchange community or rephrase it, I’m happy to make changes here as well.

Here are some links to the research I’ve done so far on the topic:

  1. https://stackoverflow.com/questions/52275235/fargate-vs-lambda-when-to-use-which
  2. https://clouductivity.com/amazon-web-services/aws-lambda-vs-ecs/
  3. https://www.youtube.com/watch?v=-L6g9J9_zB8

database – What data storage services in AWS provide strong consistency?

I am trying to draw a comparison of the data storage solutions provided by Amazon AWS, and one of the main aspects of the analysis is consistency vs performance/throughput, as that is a very important dimension for the purpose of my investigation. I know that DynamoDB, RDS, and (more recently) S3 all offer strong consistency options, with DynamoDB being the winner when it comes to huge scale.
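For concreteness, strong consistency in DynamoDB is opt-in per read; a boto3 sketch with a placeholder table and key (a strongly consistent read consumes twice the read capacity of an eventually consistent one):

    import boto3

    table = boto3.resource("dynamodb").Table("my-table")  # placeholder table name

    # Reads are eventually consistent by default;
    # ConsistentRead=True requests a strongly consistent read.
    resp = table.get_item(
        Key={"pk": "user#123"},  # placeholder key schema and value
        ConsistentRead=True,
    )
    print(resp.get("Item"))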

I also learned that AWS ElastiCache for Redis has the WAIT command, which increases the odds of reading a just-written record, but that doesn’t make it strongly consistent.

I wonder if there are other options in AWS that also provide strongly consistent storage and can challenge DynamoDB in throughput, cost, or any other relevant dimension.

digital signature – Is it secure to store private keys in AWS Secrets Manager?

I’m implementing a service that signs and sends transactions at the end of the day; it acts like a crypto exchange. The service creates a key pair (a private key and its public key) for every new user. Users can deposit funds to their respective public key (this is a hot wallet, and they have no access to its private key), and the service must send those funds to a cold wallet at the end of the day. I’m looking for a solution to store these private keys safely using AWS infrastructure.

Reading the description of AWS Secrets Manager, it says that it can store API keys, DB credentials, or OAuth tokens, but it doesn’t mention crypto private keys.

I found other solutions using HashiCorp tools, but I would prefer a solution built on AWS services.

So, the questions are:

  1. Is it safe to store private keys in AWS KMS or Secrets Manager?
  2. If it is not, is there a solution using AWS?
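Not an authoritative answer, but for scale: Secrets Manager will hold arbitrary strings, so a PEM-encoded private key can be stored and fetched like any other secret (the names below are placeholders). An alternative worth weighing is a KMS asymmetric key (including, as I understand it, the secp256k1 curve used by many chains), where the private key is non-exportable and signing happens inside KMS via its Sign API:

    import boto3

    sm = boto3.client("secretsmanager")

    # Placeholder: PEM text of the freshly generated private key.
    pem_private_key = "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----"

    # Store the key under a per-user secret name (placeholder naming scheme).
    sm.create_secret(
        Name="wallets/user-123/private-key",
        SecretString=pem_private_key,
    )

    # Later, retrieve it for signing the end-of-day transactions.
    pem = sm.get_secret_value(SecretId="wallets/user-123/private-key")["SecretString"]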