amazon rds – Storing Analytics Data for Multi-Tenant SaaS with AWS Aurora

I have an app where a user can upload an Excel sheet of analytics data to S3. I want to trigger a Lambda function on upload to do some data processing and then write the analytics to the client organization's database (I am using Aurora). Eventually we will be capturing live clickstream data, but for now we are just using generated reports.
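For context, a minimal sketch of the Lambda side of that pipeline. The bucket/key extraction follows the standard S3 event notification shape; the spreadsheet parsing and the Aurora write are left as hypothetical comments, since those depend on your schema:

```python
import urllib.parse

def s3_object_from_event(event):
    """Pull the bucket and (URL-decoded) key out of the first S3 record
    of a Lambda event notification."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    # S3 URL-encodes keys in event payloads (spaces arrive as '+').
    key = urllib.parse.unquote_plus(record["object"]["key"])
    return bucket, key

def lambda_handler(event, context):
    bucket, key = s3_object_from_event(event)
    # Hypothetical next steps: fetch the object with boto3, parse the
    # spreadsheet (e.g. with openpyxl), and batch-insert the rows into
    # the tenant's Aurora schema.
    return {"bucket": bucket, "key": key}
```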

My question is: is it best practice to just keep all the analytics in one database, with thousands and thousands of events in as many rows? I can see that a table supports a maximum of … a little over 4.29 billion rows, but does that mean I can just pile them into one giant table until then? If I am potentially getting 50k rows per month, should I just not worry about it until I see a performance hit (if I ever see one at all)? Or am I, as a newbie, worrying over nothing?

Ideally I don't just want to make this thing work; I want to learn how to make something that lasts and scales. Reading the Aurora docs, it sounds like this shouldn't be an issue, but I don't know if I'm just not seeing something that will become one.

Thanks for any advice and feedback!

Mount AWS S3 on boot while behind proxy using s3fs

In order to get the mount to work, I first have to set the https_proxy environment variable before I run s3fs.

export https_proxy=<proxy address>
s3fs -d <bucket> mount-point -o url= -o use_path_request_style

I'd like to mount my S3 bucket on boot, but I am behind a proxy. I can add the mount to fstab, but it doesn't work because fstab knows nothing about the proxy.
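One common workaround (a sketch, assuming a systemd-based distribution; the bucket name, mount point, and proxy address below are placeholders) is to skip fstab and use a systemd unit, which can set the proxy variable in the service's environment before s3fs runs:

```ini
# /etc/systemd/system/s3fs-mount.service  (hypothetical paths and proxy)
[Unit]
Description=Mount S3 bucket with s3fs behind a proxy
After=network-online.target
Wants=network-online.target

[Service]
# s3fs daemonizes by default, hence Type=forking
Type=forking
Environment=https_proxy=http://proxy.example.com:8080
ExecStart=/usr/bin/s3fs mybucket /mnt/s3 -o use_path_request_style
ExecStop=/bin/fusermount -u /mnt/s3
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable s3fs-mount` so it runs at boot.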

amazon s3 – Using AWS to capture and store data (Python)

I have a Python script that gets data from a specific web page, does the ETL, and saves the result as a .CSV file on my desktop.

I need to move this code into AWS (I believe AWS Glue is the right service), set a trigger to run it at 00:00 every day, and store the collected data in S3.

Has anyone here done something like that? I'm struggling with it and have been unsuccessful so far. Any direction you can provide is appreciated.
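For a job this small, a plain Lambda on an EventBridge (CloudWatch Events) schedule rule — e.g. the cron expression `cron(0 0 * * ? *)` for 00:00 UTC daily — is one common alternative to Glue. A sketch with hypothetical names (`my-etl-bucket`; `extract_and_transform` stands in for the existing scraping/ETL code): the main change from the desktop version is writing the CSV to memory and then to S3 instead of a local file.

```python
import csv
import io
from datetime import datetime, timezone

def rows_to_csv(rows, header):
    """Serialize rows to CSV text in memory (no desktop file in Lambda)."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    writer.writerows(rows)
    return buf.getvalue()

def lambda_handler(event, context):
    import boto3  # available in the Lambda runtime
    rows = extract_and_transform()  # hypothetical: your existing ETL logic
    body = rows_to_csv(rows, header=["date", "value"])
    key = f"reports/{datetime.now(timezone.utc):%Y-%m-%d}.csv"
    boto3.client("s3").put_object(
        Bucket="my-etl-bucket", Key=key, Body=body.encode("utf-8")
    )
    return {"written": key}
```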

AWS Elastic Beanstalk – WordPress SEO-friendly URLs not working (with Nginx)

We deployed a WordPress application with Elastic Beanstalk. The home page works fine, but the internal SEO-friendly URLs do not. I pushed an .htaccess file, but then I found out the environment runs Nginx. Does anyone have an idea why the SEO-friendly URLs are not working and how we can make them work?
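For what it's worth, .htaccess files are Apache-only; on Nginx, pretty-permalink support usually comes down to routing URLs that don't match a file to WordPress's front controller. A minimal sketch of the relevant location block (assuming a standard WordPress document root):

```nginx
# Send non-file URLs to index.php instead of returning 404,
# so WordPress can resolve the pretty permalink itself.
location / {
    try_files $uri $uri/ /index.php?$args;
}
```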


Why do I have a reauth=1 redirection loop when I try to log in to WordPress hosted on AWS Fargate?

I have set up an AWS Fargate cluster with ten WordPress containers behind an AWS Application Load Balancer (ALB).

I tested that my settings work when I have one container. Why can I not log in when I have ten containers?

amazon web services – AWS Comprehend + Pyspark UDF = Error: can’t pickle SSLContext objects

When applying a Pyspark UDF that calls an AWS API, I get the error

PicklingError: Could not serialize object: TypeError: can't pickle SSLContext objects

The code is

import pyspark.sql.functions as sqlf
import boto3

comprehend = boto3.client('comprehend', region_name='us-east-1')

def detect_sentiment(text):
  response = comprehend.detect_sentiment(Text=text, LanguageCode='pt')
  return response["SentimentScore"]["Positive"]

detect_sentiment_udf = sqlf.udf(detect_sentiment)

test = df.withColumn("Positive", detect_sentiment_udf(df.Conversa))

where df.Conversa contains short, simple strings.
How can I solve this, or what would be an alternative approach?
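One likely explanation, sketched below: Spark has to pickle the UDF together with everything it references, and the module-level boto3 client holds a live SSLContext, which cannot be pickled. Moving the client creation inside the function (so each executor builds its own client) is a common workaround. The snippet demonstrates the principle with the standard pickle and ssl modules, without needing Spark or AWS:

```python
import pickle
import ssl

# An SSLContext (what a boto3 client holds internally) cannot be pickled:
try:
    pickle.dumps(ssl.create_default_context())
    context_picklable = True
except TypeError:
    context_picklable = False  # this is the path actually taken

# A function that builds its client lazily pickles fine, because the
# pickled payload contains no live network objects:
def detect_sentiment(text):
    import boto3  # resolved at call time, on the executor
    client = boto3.client("comprehend", region_name="us-east-1")
    resp = client.detect_sentiment(Text=text, LanguageCode="pt")
    return resp["SentimentScore"]["Positive"]

payload = pickle.dumps(detect_sentiment)  # succeeds
```

In the real UDF this means moving the `boto3.client(...)` call inside `detect_sentiment` (or using `mapPartitions` to create one client per partition, which avoids a new client per row).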

AWS SSL, Domain & Name Server Propagation

So my boss said, "I don't know what company you were working for before, but DNS settings have not taken 24 hours to take effect since I started this company."

I'm starting to doubt whether I'm doing this AWS thing right.

Here are the pieces involved:

  1. EC2 Instance
  2. Elastic IP
  3. Load Balancer
  4. Route 53
  5. DNS A Record
  6. DNS Nameserver

As of now, I have tried all of these. Changing nameservers can take a few hours to take effect, often up to 24.


A DNS A record takes 4–8 hours before it takes effect.

I waited almost a day just to finish this, and it now works fine.

I know that changing nameservers takes some time, but is there another way to set up SSL on my EC2 instance without waiting this long? Some of you may down-vote this question since I've already done it, but I'd like to understand whether there is a way to apply SSL to the instance's Elastic IP without using Route 53, or at least without waiting so much.

amazon web services – AWS Golden Image from VMware to AMI

I would like to know if it is a good idea to create an AWS CentOS golden image in VMware and then export it to AWS as an AMI using VM Import/Export.

I ask because AWS tunes CentOS kernel parameters for its infrastructure, and when you import your own image as an AMI, you risk losing that optimization.

Naming AWS Resources – Information Security Stack Exchange

For S3 bucket names, anyone can query a bucket name (without specifying an account owner) and see if the bucket is misconfigured to allow public access.

Therefore, it probably makes sense to use a hard-to-guess name as an extra layer of security, in case the bucket settings are ever accidentally changed to public.
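As a small illustration (the prefix here is invented), appending a random suffix keeps a readable prefix while making the full bucket name unguessable. This is a mitigation on top of, never a substitute for, correct bucket policies:

```python
import uuid

def obscure_bucket_name(prefix):
    # Append 12 random hex characters so the bucket name cannot be
    # guessed from the company or project name alone.
    suffix = uuid.uuid4().hex[:12]
    return f"{prefix}-{suffix}"
```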

Are there other resources in AWS where a similar level of caution could be applied to the naming? (i.e. where anyone can query the resource settings with just the name of the resource)

AWS CloudWatch doesn't invoke my Lambda

A few days ago, my CloudWatch rule suddenly stopped invoking the function.

I've tried recreating the CloudWatch rule and the Lambda trigger, but it still doesn't work.

I tried everything, but nothing solves it.

When I test the Lambda function directly, it works fine.

Anyone know a solution?

thank you,
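For reference, one frequent cause of a rule that stops invoking a function is a missing resource-based permission on the function (the statement granting invoke rights to `events.amazonaws.com`, visible via `aws lambda get-policy`). A hedged sketch of a helper that checks a downloaded policy document for that principal — the policy shape here is illustrative, and real policies may use lists for `Action`:

```python
import json

def allows_eventbridge(policy_json):
    """Return True if a Lambda resource policy grants lambda:InvokeFunction
    to the CloudWatch Events / EventBridge service principal.
    Simplification: assumes Action is a plain string, not a list."""
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        service = stmt.get("Principal", {}).get("Service", "")
        if (stmt.get("Effect") == "Allow"
                and stmt.get("Action") == "lambda:InvokeFunction"
                and service == "events.amazonaws.com"):
            return True
    return False
```

If the check comes back False, re-adding the permission (e.g. with `aws lambda add-permission`) or recreating the rule's target usually restores the invocations.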