domain model – How to manage bad third-party APIs in a microservices architecture?

I am currently transforming a monolithic application into a microservices-based architecture. The monolith depends on third-party services (i.e., services owned by other departments) for its data. Most of these third-party services are accessible via SOAP, while some use REST.

Some of these third-party services have terrible (or even unusable, IMO) APIs. This adds a lot of unnecessary complexity and boilerplate to our aggregator service (which is a monolith atm). I'm trying to map the domain of one of these third-party APIs to a usable domain in an ACL (anti-corruption layer) so I can offer a decent API to our aggregation service.
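To make the mapping concrete, here is a minimal sketch of the kind of ACL adapter I have in mind (Python for brevity; the third-party client, method, and field names are all made up for illustration):

```python
# Minimal sketch of an ACL adapter, assuming a hypothetical third-party
# SOAP client; every field and method name here is made up for illustration.
from dataclasses import dataclass


@dataclass
class Customer:
    """Clean domain model exposed to our aggregation service."""
    id: str
    name: str
    email: str


class ThirdPartyCustomerAdapter:
    """Translates the third party's awkward representation into our domain."""

    def __init__(self, soap_client):
        # e.g., a zeep client wrapping their WSDL (assumption)
        self._client = soap_client

    def get_customer(self, customer_id: str) -> Customer:
        # Hypothetical ugly response: a flat dict with cryptic keys.
        raw = self._client.FetchCustRec(custNo=customer_id)
        return Customer(
            id=raw["CUST_NO"],
            name=f'{raw["FRST_NM"]} {raw["LST_NM"]}'.strip(),
            email=raw["EML_ADDR"].lower(),
        )
```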

However, that got me thinking: is this even the right way to do it? It's a lot of work; should I just bite the bullet and use these awful APIs directly? How do you manage terrible third-party APIs in a microservices architecture without making your services less maintainable?

Thanks in advance!

microservices – I am building a decentralized service discovery system

I'm building a decentralized service discovery and load-balancing system for distributed systems, written in Node.js. My main idea is to create a faster, simpler and less complex alternative to centralized systems like HashiCorp Consul, which involve load balancers and name-resolution servers. This system is aimed at small and medium projects that want to adopt a microservices architecture without too many complications.

The system is based on a decentralized node architecture where each node keeps an in-memory record of the location and state of the other nodes in its cluster.
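To give a rough idea of the mechanism, here is a simplified sketch of that in-memory registry (in Python for illustration; the actual implementation is the Node.js code in the repository, and the heartbeat bookkeeping here is simplified):

```python
# Minimal sketch of a per-node, in-memory registry of cluster peers.
# Python stand-in for the Node.js implementation; details simplified.
import time


class PeerRegistry:
    def __init__(self, ttl_seconds: float = 10.0):
        self._ttl = ttl_seconds
        self._peers = {}  # node_id -> {"address": str, "last_seen": float}

    def heartbeat(self, node_id: str, address: str) -> None:
        """Record that we heard from a peer (e.g., via gossip or a ping)."""
        self._peers[node_id] = {"address": address, "last_seen": time.time()}

    def alive_peers(self) -> dict:
        """Peers whose last heartbeat falls within the TTL window."""
        now = time.time()
        return {
            node_id: info
            for node_id, info in self._peers.items()
            if now - info["last_seen"] <= self._ttl
        }
```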

All the advantages of this system can be read in the README file of the repository.

I have yet to write many technical details and documentation, but I think the main idea is clear.

I would like you to share your criticisms with me. Even negative reviews will be extremely valuable to me and will help me decide whether I should continue working on this project.

GitHub repository

Thank you!

google app engine – Development paradigm for GAE microservices

We are building microservices that we will probably deploy using GAE, and I am relatively new to GAE. I have done a lot of other development in my day, but this paradigm is a little different. I wonder if anyone can offer advice on the most common methodology for developing services deployed on GAE. Specifically, do developers tend to code locally, test the functionality, and then deploy to GAE to test everything end to end? Or do they make a small change to their code and push it straight to GAE to test it there? My guess is that you build the basic functionality of the service locally first, then once it works, deploy it to GAE to make sure it still behaves the same, since it looks like it would be more difficult to debug while running on GAE. I'm just wondering what typical model developers follow when creating services to deploy on GAE.
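To make my question concrete, this is the kind of minimal service and workflow I have in mind (a hypothetical Flask app; as far as I understand, dev_appserver.py applies to the legacy standard environment and gcloud app deploy to current ones):

```python
# main.py - minimal GAE-style service used to illustrate the workflow.
# Local loop:  run with `python main.py` (or dev_appserver.py app.yaml on
#              the legacy standard environment) and test against localhost.
# Deploy loop: `gcloud app deploy` once the local tests pass.
from flask import Flask  # Flask is a common choice on GAE standard Python

app = Flask(__name__)


@app.route("/")
def index():
    return "hello from my microservice"


if __name__ == "__main__":
    # Only used for local development; on GAE the app is served by gunicorn.
    app.run(host="127.0.0.1", port=8080, debug=True)
```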

microservices – Is my project's architecture a microservices architecture?

Is the architecture of my project a microservices architecture?

In my project, we start the day with an SOD (start-of-day) script that performs some early-morning work. On completion it generates an event, which triggers a few additional processes that begin loading data. The chain continues like this, with each step triggered sometimes by events and sometimes by status updates in the database.
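To make the chain concrete, here is a toy Python sketch of the pattern (our real processes are separate programs in different languages; all names here are made up):

```python
# Toy sketch of the chain: a start-of-day (SOD) step publishes a completion
# event, which triggers the next step, and so on. Names are illustrative.
from collections import defaultdict

handlers = defaultdict(list)  # event name -> list of callbacks


def subscribe(event, handler):
    handlers[event].append(handler)


def publish(event):
    for handler in handlers[event]:
        handler()


def sod_script():
    print("SOD: early-morning work done")
    publish("sod.completed")


def data_loader():
    print("loader: loading data")
    publish("load.completed")


subscribe("sod.completed", data_loader)
subscribe("load.completed", lambda: print("next step in the chain runs"))
sod_script()
```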

All the processes are independent services, developed in different languages. Is this type of architecture classified as a microservices architecture?

architecture – Is there a way to generate a dependency graph for my microservices?

I have a general question, nothing specific to one stack.

I was wondering if there is a tool or method we could use to retrieve (extract) all the dependencies of a microservices architecture. I did my research online, but the tools I found are restricted to a particular stack, mainly Spring Boot.

Is there a general method (a technique, a tool) that could do this regardless of the programming language or framework used to create your microservices? Something you would run on your back end that would generate a graph of all your microservices, showing which microservice uses which other local and remote services?
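The only generic approach I can think of so far is to collect each service's outbound calls (from access logs, tracing headers, or a hand-maintained map) and render them with Graphviz. A rough sketch, assuming the service-to-dependency map has already been obtained somehow:

```python
# Sketch: turn a service -> dependencies map into a Graphviz DOT graph.
# Obtaining the map (access logs, tracing, config files) is the hard,
# stack-specific part; this map is hand-written for illustration.
dependencies = {
    "orders": ["inventory", "payments"],
    "payments": ["bank-gateway"],  # remote/third-party service
    "inventory": [],
}


def to_dot(deps: dict) -> str:
    lines = ["digraph microservices {"]
    for service, targets in deps.items():
        for target in targets:
            lines.append(f'  "{service}" -> "{target}";')
    lines.append("}")
    return "\n".join(lines)


print(to_dot(dependencies))  # render with: dot -Tpng graph.dot -o graph.png
```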

architecture – Microservices for an application with a custom API without imposing traffic load

Let's say you have an app like Facebook, where each post can be labeled with a Place. The entire backend API of the social application (essentially the entire client API) is built using Node.js + Postgres, but Places autocomplete is a custom API built in Go, for example.

Since the Places API is essentially backed by a static database (Postgres), storing only reference information that users should not be able to modify, it makes sense to put it in its own microservice.

So it makes sense to have the following architecture:

Service A – the main client API / backend. Through this service I can like a post, follow a friend, and publish new posts.

Service B – holds all the information about places. It has tables with cities and countries and exposes an API to retrieve this information.

So, if a user publishes a new post in London, service A handles the action and creates a record in its own database in the "Posts" table, where one of the columns holds London's ID (found in service B's table).

Now, the next time I want to fetch this post, I will only have the place ID, but obviously we want to show the place's information (city name, country name, etc.).

This means that for an endpoint like getPost(id=2) in service A, we would have to join against the Places tables in service B. And that's the problem: microservices should ideally not communicate with each other, since that would create a load of unwanted traffic. Frankly, I don't even know how, or whether, such a cross-service join is technically feasible.
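The closest thing I can picture is API composition: service A fetches the post from its own database and then calls service B over HTTP to enrich it. A rough sketch (Python for brevity, though our real stack is Node.js + Go; the /places endpoint, the db helper, and all field names are hypothetical):

```python
# Sketch of API composition in service A's getPost handler.
# Service B's /places/<id> endpoint and all field names are hypothetical.
import requests

PLACES_SERVICE_URL = "http://service-b:8080"  # assumed internal address


def get_post(post_id: int, db) -> dict:
    # 1. Fetch the post from service A's own database.
    #    `db.fetch_one` is a hypothetical helper returning a dict row.
    post = db.fetch_one("SELECT id, body, place_id FROM posts WHERE id = %s",
                        (post_id,))
    # 2. Enrich it with place details from service B over HTTP.
    resp = requests.get(f"{PLACES_SERVICE_URL}/places/{post['place_id']}",
                        timeout=2)
    resp.raise_for_status()
    place = resp.json()
    return {**post, "place": {"city": place["city"],
                              "country": place["country"]}}
```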

The alternative would be a monorepo, with the Places project in Go and the main project in Node.js, either sharing the same database or using two databases.

I am not able to properly weigh the pros, cons, and viability of these alternatives, and would like to understand what is usually done in cases like this.

P.S. – Whether it ends up being the monorepo architecture or microservices, I intend to use Docker + Kubernetes.

microservices – Difference between PID and HTTP health checks

(I'm assuming PID means process ID here; I don't know Cloud Foundry.)

An HTTP-based check of a service process in a container verifies that the process is running AND responding to requests.

A PID-based check simply tests whether a process with the given process ID exists; it cannot verify that the process is performing its designated tasks. The process could be stuck in an infinite loop or a deadlock, preventing it from responding to requests.
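To illustrate the difference, here is a minimal Python sketch of the two checks (os.kill with signal 0 is the conventional "does this PID exist" test on Unix; the /health URL is an assumption):

```python
# Sketch: PID-based liveness check vs. HTTP-based health check.
import os
import urllib.request


def pid_check(pid: int) -> bool:
    """True if a process with this PID exists; says nothing about health."""
    try:
        os.kill(pid, 0)  # signal 0: existence check only, nothing is sent
        return True
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # the process exists; we just may not signal it


def http_check(url: str = "http://localhost:8080/health") -> bool:
    """True only if the process actually answers requests in time."""
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.status == 200
    except OSError:  # covers URLError, timeouts, connection refused
        return False
```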

I don't know what strategies are recommended in Cloud Foundry for managing these processes. When I have similar situations in Docker containers, I often run a separate internal health check process in the container and use some sort of heartbeat in the service process to report that it is still alive. Of course, it depends on the specific application.

azure – Architecture for a microservices application

I don't know if this is a good forum for questions about infrastructure architecture, but I'm asking in the hope that it is.

One of my clients has a web application built with modern microservices technology; Kubernetes is the underlying layer, and on top of that they use a CDN, API hosting, etc. Now, from a public cloud perspective (Azure or AWS), how should I design the infrastructure here? I have a few questions about the services they use. For simplicity, I'll speak from the Azure point of view. It has been decided to use the following Azure components:

Azure CDN, Azure Application Gateway, Azure Front Door.

I am confused about the call flow through these services. When a client (such as a web browser) makes a request to the application, ideally static content should be served by Azure CDN and dynamic content by reaching the container or server. So this is my guess at the call flow:

Browser -> Azure Front Door -> Application Gateway -> API Management Microservice -> Other Microservices -> Azure CDN -> Browser

Is this correct? If not, can you guide me toward a better architecture? Any help would be really appreciated.

c# – Microservices and third-party APIs

I recently joined a team that is trying to use the "microservice" model for their new application. They have already started implementing APIs; in the end, there should be one API backing the mobile and web UIs. But I feel they are doing it badly. Things that don't make sense to me:

  • they put the "microservices" into a single git repository with one solution file (.sln; they use dotnet).
  • since all the microservices are intertwined, they cannot be deployed independently.
  • they tried to use the "gateway pattern", but in the gateways they make HTTP calls to the other "microservices".
  • they implemented a "common" project (class library) for all requests and responses (all projects reference this project).
  • for each third-party API, they implemented a "microservice" to consume it (think of it like this: the mobile app makes a request to a gateway, the gateway proxies that request to a microservice, and this microservice makes the "real" HTTP call to the third party). It's the same for the web UI.

I don't feel like they're doing things right, but I don't know where to start. I tried to trace a request, but it did not go well; there are tons of boilerplate code in the solution. The bad thing is that there is a deadline, and they don't want to commit to a big refactor.

What should the next step be? Should we stop and refactor all of this?

Thank you.

rest – Implementing a RESTful API in front of event-based microservices

I am working on a system that implements several microservices that communicate via a RabbitMQ messaging bus.

  • These microservices are written in Python with the pika library (to publish messages to and consume from RabbitMQ queues)
  • One of these microservices (let's call it "orders") has a database attached to store its data

Until now, the application components have been fully asynchronous, relying entirely on RabbitMQ exchanges/queues for communication and, where necessary, implementing callback queues when one microservice needs to request data from another.

Now that I have backend microservices talking to each other, I would like to implement a RESTful API for the "orders" microservice so that clients (e.g., web browsers, external applications) can retrieve and send data.

I can think of two ways to do this:

  1. Create another microservice (let's call it "orders-api") in something like Flask and connect it to the database that sits behind the "orders" microservice. This seems like a bad idea because it breaks the microservice principle that a database belongs to exactly one microservice (I don't want two microservices to have to know the same data model).

  2. Create an "api-gateway" microservice that exposes a RESTful API and, upon receipt of a request, requests information from the "orders" microservice via the messaging bus. Similar to the way RabbitMQ documents remote procedure calls here: https://www.rabbitmq.com/tutorials/tutorial-six-python.html. This would mean that the "api-gateway" would be synchronous, and therefore, would hang while waiting for a response on the messaging bus.

There may be other ways to achieve this that I'm not aware of. Any suggestion on how to integrate a RESTful API into this environment would be appreciated!