Cloudflare Complains About Amazon AWS Egress Fees

In a blog post this week, Cloudflare complained that Amazon AWS’s egress fees are too high. According to Cloudflare’s calculations, customers are paying:

  • 80 times Amazon’s costs in the US, Canada, and Europe
  • 21 times Amazon’s costs in South America
  • Between 3.5 times and 17 times Amazon’s costs in Asia

As they put it, “Amazon stands alone in not passing on savings,” and the post notes that Amazon AWS has declined to join the Bandwidth Alliance, in which major providers substantially discount egress charges between mutual Cloudflare customers.

Industry-wide, wholesale bandwidth is now 93% cheaper than it was in 2011. Over the same period, Amazon AWS’s egress pricing has dropped only about 25%.

There’s a lot more analysis in the post, which is a very interesting read and underscores the point that the big cloud players are not the right fit for every user. Paying per-gigabyte for network traffic is enormously expensive, and in many cases buying a VM package with bundled bandwidth is more cost-effective, as the rough comparison below illustrates.
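To make the difference concrete, here is a back-of-the-envelope comparison. The numbers are illustrative assumptions rather than quotes: roughly $0.09/GB is AWS’s long-standing US-region egress list price, and an $8/month VPS with 4 TB of bundled transfer is a typical low-end plan.

// Illustrative cost comparison; the prices below are assumptions, not quotes.
const awsEgressPerGB = 0.09;   // approximate AWS US-region egress list price
const vpsMonthlyPrice = 8;     // hypothetical low-end VPS plan
const vpsBundledGB = 4000;     // 4 TB of transfer bundled into that plan

const monthlyTrafficGB = 4000; // pushing 4 TB of outbound traffic per month

const awsCost = monthlyTrafficGB * awsEgressPerGB; // $360/month for bandwidth alone
const vpsCost = vpsMonthlyPrice;                   // $8/month, traffic included

console.log(`AWS egress: $${awsCost.toFixed(2)}/month`);
console.log(`Bundled VPS: $${vpsCost.toFixed(2)}/month`);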

raindog308

I’m Andrew, techno polymath and long-time LowEndTalk community Moderator. My technical interests include all things Unix, Perl, Python, shell scripting, and relational database systems. I enjoy writing technical articles here on LowEndBox to help people get more out of their VPSes.

How do I resolve a routing error with Serverless, AWS Lambdas, and RESTful APIs?

I have the following RESTful routes:

GET /products/
GET /products/{sn}
GET /products/{sn}/cycles
GET /products/{sn}/cycles/{id}
GET /products/types 

which are configured as follows in serverless.yml:

fnGetProduct:
    name: ${self:provider.stage}-${self:custom.fnGetProduct}
    handler: src/products/fnGetProduct.fnGetProduct
    events:
    - http:
        path: products
        method: GET
        private: true
        cors: true
        authorizer: ${file(env.yml):${self:provider.stage}.authorizer}
    - http:
        path: products/{sn}
        method: GET
        private: true
        cors: true
        authorizer: ${file(env.yml):${self:provider.stage}.authorizer}
        request:
            parameters:
              paths:
                sn: true
    - http:
        path: /products/{sn}/cycles
        method: GET
        private: true
        cors: true
        authorizer: ${file(env.yml):${self:provider.stage}.authorizer}
        request:
            parameters:
              paths:
                sn: true
    - http:
        path: /products/{sn}/cycles/{id}
        method: GET
        private: true
        cors: true
        authorizer: ${file(env.yml):${self:provider.stage}.authorizer}
        request:
            parameters:
              paths:
                sn: true
                id: true
    - http:
        path: /products/type
        method: GET
        private: true
        cors: true
        authorizer: ${file(env.yml):${self:provider.stage}.authorizer}

Notice that all of the routes point to a single file/function, fnGetProduct, where I filter the resources as follows:

const resource = event.resource

// the handler names below are hypothetical stand-ins for the real implementations
if (resource === '/products') {
  return await getProducts(event)

} else if (resource === '/products/{sn}') {
  return await getProductBySn(event)

} else if (resource === '/products/{sn}/cycles') {
  return await getCycles(event)

} else if (resource === '/products/{sn}/cycles/{id}') {
  return await getCycleById(event)

} else if (resource === '/products/types') {
  return await getProductTypes(event)
}
...

The problem is that the URL /products/aghdj should be captured by the /products/{sn} route, and that works. But the /products/types route is also being captured by /products/{sn}, since types is being matched as the {sn} parameter. How do I fix this behavior?
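One detail worth flagging in the configuration quoted above: API Gateway gives a literal path segment priority over a path variable, so a route declared exactly as products/types should win over products/{sn}. However, the serverless.yml declares /products/type (singular), while the handler checks for /products/types. A sketch of the literal route, assuming the plural was intended:

# hypothetical fix: declare the literal plural segment so API Gateway
# matches it before the products/{sn} path variable
- http:
    path: products/types
    method: GET
    private: true
    cors: true
    authorizer: ${file(env.yml):${self:provider.stage}.authorizer}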

amazon web services – Can I run Docker Compose on an already existing AWS EC2 instance?

I currently have a t4g.medium AWS EC2 instance running my client’s application on an Apache server, but I have been working on migrating to Docker, more specifically Docker Compose. All the apps are running on my local computer and are production ready, but I cannot find out whether EC2 supports Docker Compose without using ECS.
I have been researching but cannot find a definitive answer and don’t know what to do. I don’t want to use ECS because I already have a savings plan for this instance and a lot of experience with it. Is there a way to use Docker Compose, or to force ECS to use my existing EC2 instance? Thanks.
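For what it’s worth, an EC2 instance is just a Linux host: once Docker Engine and the Compose plugin are installed, docker compose runs on it exactly as it does locally, and nothing about Compose requires ECS. A minimal sketch of the kind of compose file that would run on such an instance; the service names and images are placeholders, not taken from the question:

# hypothetical docker-compose.yml; service names and images are illustrative
services:
  web:
    image: php:8.2-apache   # official images are multi-arch, so this runs on t4g (arm64)
    ports:
      - "80:80"
    volumes:
      - ./app:/var/www/html
    restart: unless-stopped
  db:
    image: mariadb:10.11
    environment:
      MARIADB_ROOT_PASSWORD: change-me
    volumes:
      - dbdata:/var/lib/mysql
    restart: unless-stopped

volumes:
  dbdata: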

aws – What should the server architecture look like for a service that stores files from a desktop application in the cloud (S3/cloud storage)?

I developed a desktop application and I am in the process of adding support for online cloud storage. The main requirement is to allow the user to store files in the cloud while being able to delete them locally to save space (this is not possible with Dropbox or Google Drive).

My initial idea is to set up a server with Nginx that accepts incoming connections and forwards them to a web service, acting as a reverse proxy.

If the incoming request is a download/upload, the request is redirected to the S3/GCS server. I want to avoid a direct connection to the S3/GCS container. Is this a suitable architecture?

TL;DR: What should an architecture look like where a desktop application can send files to a custom cloud server?

Dropbox and Google Drive are not suitable for my workflow, as they don’t allow deleting a file locally while keeping it in the cloud; files and directories are always synced.
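As an aside, the pattern described above, where the service authorizes each transfer but the file bytes do not flow through it, is commonly implemented with presigned URLs: the backend issues a short-lived signed URL and the desktop client uploads or downloads directly against object storage. A minimal sketch, assuming the AWS SDK for JavaScript v3; the bucket name and region are placeholders:

// presigned-URL sketch using the AWS SDK for JavaScript v3;
// bucket and region are hypothetical placeholders
const { S3Client, GetObjectCommand, PutObjectCommand } = require('@aws-sdk/client-s3');
const { getSignedUrl } = require('@aws-sdk/s3-request-presigner');

const s3 = new S3Client({ region: 'us-east-1' });

// issue a one-hour download URL after the backend has authenticated the user
async function presignDownload(key) {
  return getSignedUrl(s3, new GetObjectCommand({ Bucket: 'my-app-files', Key: key }), { expiresIn: 3600 });
}

// likewise for uploads: the desktop client PUTs the file straight to S3
async function presignUpload(key) {
  return getSignedUrl(s3, new PutObjectCommand({ Bucket: 'my-app-files', Key: key }), { expiresIn: 3600 });
}

This keeps the web service as the single point of authentication while large transfers bypass it, which is usually cheaper and faster than proxying every byte through Nginx.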

Independent Cloud Virtual Server Hosting Deal! | Intel Optane NVMe | 90% Cheaper Than AWS


Why Choose Farbyte for Your VPS Hosting?

Trusted

At Farbyte we’ve been providing industry-leading internet hosting services since 2006!

On top of that, Farbyte is a fully legal UK limited company with our own infrastructure located in ISO 27001 & 9001 accredited data centres.

As a full RIPE member, we have our own IPv4/IPv6 ranges!

About Farbyte!
See client reviews!

Cloud VPS Infrastructure

Your business is important to you and your data is your most valuable digital asset.

At Farbyte we understand this & have created a VPS hosting platform focused not only on performance, but uptime too!

Every VPS is stored on our triple-replicated, distributed cloud storage platform.

OpenVZ or KVM

Choose either OpenVZ containers or full KVM virtual machines.

Our OpenVZ containers are slightly cheaper, but still hosted on our private UK cloud.

With KVM Infrastructure as a Service (IaaS) you get a full hypervisor VPS & your own virtual infrastructure with free VPS firewall for every VPS.

KVM IaaS also boasts optional VPS snapshots & a private unmetered network VLAN between your VPSes.

Support

Our support is 2nd to none.

Highly trained staff are on hand to help you all the way.

You can contact us via our support ticket or live chat systems.

Bargain Cloud Virtual Machines & Infrastructure!

At Farbyte we provide premium featured cloud VPS hosting at a fraction of the price.

You can get started today for around $8/month!

Features included with all plans:

  • FREE – bandwidth with all VPS
  • FREE – reverse DNS
  • FREE – server / website migration
  • Public IPv4 & IPv6 addresses
  • Triple-replicated, distributed storage
  • Automated server failover (cloud)
  • Isolated hosting environment
  • Defined resource allocations (RAM, CPU, disk IO, etc.)
  • Managed & Unmanaged VPS
  • No contracts
  • Simple, easy billing
  • Flexible upgrades & downgrades
  • 99.99% uptime
  • ISO 9001/27001 data centres
  • 30 Day Guarantee
  • + more.

Cloud OpenVZ VPS – starts £6.60 / month

  • OpenVZ – FREE DNS hosting
  • OpenVZ – instant deployment

Cloud KVM VPS click here – starts £8.75 / month

  • KVM – FREE VPS firewalls
  • KVM – snapshots (optional)
  • KVM – unmetered private VLAN (optional)
  • KVM – Virtual infrastructure nearly 90% cheaper than AWS, Google & Azure!

Learn more about our UK VPS here! – from £6.60 / month (~ $8 / month)

If you’ve any questions, please contact us at:
https://farbyte.uk/members/contact.php

Join us on social media:
Facebook
Twitter
YouTube

mysql – High CPU usage in AWS RDS

When I visit the WooCommerce orders page in WordPress, MySQL RDS CPU usage goes to 100%, although the website keeps working fine.
In the ‘active sessions’ section, “wait/io/table/sql/handler” shows CPU usage at 99%.
I looked at the Performance Insights for the database and saw this strange query:

SELECT SQL_CALC_FOUND_ROWS `hbm_posts`.*, `low_stock_amount_meta`.`meta_value` AS `low_stock_amount`,
  MAX(`product_lookup`.`date_created`) AS `last_order_date`
FROM `hbm_posts`
LEFT JOIN `hbm_wc_product_meta_lookup` `wc_product_meta_lookup`
  ON `hbm_posts`.`ID` = `wc_product_meta_lookup`.`product_id`
LEFT JOIN `hbm_postmeta` AS `low_stock_amount_meta`
  ON `hbm_posts`.`ID` = `low_stock_amount_meta`.`post_id`
  AND `low_stock_amount_meta`.`meta_key` = ?
LEFT JOIN `hbm_wc_order_pro

This query is not even complete, and I can’t execute it in the MySQL shell. I have tried tracing the query back in WordPress, but can’t find it anywhere. I searched for it with ‘String Locator’, reviewed the query logs in ‘Query Monitor’, tried disabling all plugins, and also tried define('SAVEQUERIES', true); as described in this post:

https://stackoverflow.com/questions/4660692/is-it-possible-to-print-a-log-of-all-database-queries-for-a-page-request-in-word

What can I do to trace this query back to its source?
The MySQL version of both server and client is 5.7.34.
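Since the statement shown in Performance Insights is normalized and truncated, one way to recover more of it on the database side, as a sketch and assuming performance_schema is enabled on the RDS instance, is to query the digest and statement-history tables directly:

-- heaviest normalized statements; DIGEST_TEXT is usually longer than
-- the truncated text shown in Performance Insights
SELECT DIGEST_TEXT, COUNT_STAR, SUM_TIMER_WAIT
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 5;

-- recent raw statements, if the history_long consumer is enabled
SELECT SQL_TEXT
FROM performance_schema.events_statements_history_long
ORDER BY TIMER_START DESC
LIMIT 20;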

amazon – When using MTurk with AWS billing activated, is there any way to place a cap on how much money can be spent within some specified timeframe (e.g., 1 day)?

There is currently no way to place a cap on how much money can be spent within a specified timeframe (e.g., 1 day/week/month) when using MTurk with AWS billing activated.

Here’s what I found: https://www.mturk.com/help

It doesn’t really say that you can place a spending cap on the account. The thing is, if you post a HIT, you have to be able to pay the workers, so it looks like it’s a matter of limiting the number of HITs you post or limiting the amount you spend per HIT.

If you don’t approve HITs and authorize payment within 30 days, they are automatically paid. If you put a spending cap on your AWS billing and there isn’t enough to pay for the HITs, that would cause problems with MTurk.

javascript – Why is the fetch request to AWS Express server failing?

I have uploaded an Express app to an AWS Elastic Beanstalk instance. When I enter the production URL into Chrome, it returns a JSON with the data I want. Here is that code:

const express = require("express");
const axios = require('axios');
const app = express();
const cors = require('cors');
const port = process.env.PORT || 5000;

app.use(cors());

app.listen(port, () => {
    console.log("listening on " + port);
})

app.get("https://wordpress.stackexchange.com/", (req, res) => {
    let login = process.env.USER
    let pass = process.env.PASS
    
    axios({
        method: 'get',
        url: process.env.URL,
        auth: {
            username: login,
            password: pass
        }
    }).then(function (response) {
        res.send(response.data);
        
      })
      .catch(function (error) {
        console.log(error);
        return;
      });
    
    
})

So it’s just one Express route that gets a JSON from an API using axios. Again, when I enter the production URL into Chrome or test this locally, it works.

Then, when I make a fetch request from my WordPress front end to the production URL, all I get is

TypeError: Failed to fetch

after a delay of a few seconds. Here is that fetch request:

//Load cards onlick with API
function wpb_hook_javascript() {
    ?>
        <script>
         function loadCards(){
             console.log("click");
             
                    
             fetch('<The-URL-of-AWS-SERVER>', {
                 method: 'GET'
             })
                .then(response => console.log(response))
                //.then(data => console.log(data))
                .catch((error) => {
                    console.error('Error:', error);
                });
         }
        </script>
    <?php
}
add_action('wp_head', 'wpb_hook_javascript');

What am I doing wrong?
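As a hedged diagnostic note: “TypeError: Failed to fetch” is Chrome’s generic error whenever a request is blocked before any response arrives, most commonly a CORS preflight failure, mixed content (an https page calling an http endpoint), or a plain network error. A quick client-side check for the mixed-content case:

// quick mixed-content check; the URL placeholder is from the question
const apiUrl = '<The-URL-of-AWS-SERVER>';
if (window.location.protocol === 'https:' && apiUrl.startsWith('http:')) {
  console.warn('Mixed content: the browser will block this fetch.');
}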

Website down, then demolished – AWS hosting

Hi,

I have a simple PHP script hosted at AWS.

I made some changes to it (PHP modifications) and checked it each time afterwards.

It a… | Read the rest of https://www.webhostingtalk.com/showthread.php?t=1852009&goto=newpost

Amazon AWS S3 Unrestricted File Upload

While I was pentesting a web application, I found that files uploaded to the application are stored in an AWS S3 bucket. In my experience, when a web application needs to store all types of files, including files with potentially malicious extensions (.php, .exe, .js, etc.), it will not allow the S3 bucket to render/execute the file content in the browser; files are automatically downloaded instead of being rendered. Surprisingly, this S3 bucket is configured to render all file types. So when I uploaded a .html file, the HTML tags were executed. Other than uploading .html files and creating my own HTML pages, what other security concerns/exploits can arise from this misconfigured S3 bucket?

Additional information: I was able to upload .html and .js files and have them render/execute when served from the S3 bucket.

Regards,
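For context on the behavior described above: whether S3 renders or downloads an object is governed by the Content-Type and Content-Disposition metadata set at upload time, so the fix on the application side is to control those headers when writing objects. A sketch, assuming the AWS SDK for JavaScript v3, with a hypothetical bucket name:

// forcing downloads regardless of file type (AWS SDK for JavaScript v3);
// the bucket name is a hypothetical placeholder
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({ region: 'us-east-1' });

async function storeUpload(key, body) {
  await s3.send(new PutObjectCommand({
    Bucket: 'uploads-bucket',
    Key: key,
    Body: body,
    ContentType: 'application/octet-stream', // never serve user uploads as text/html
    ContentDisposition: 'attachment',        // browsers download instead of rendering
  }));
}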