Transform business requirements into action, which usually involves:
After successfully doing all of that, the next step is:
In ZaloPay, each team has its own responsibilities/domains, i.e. many different services.
Ideally each team can choose its own backend tech stack if they want, but it mostly boils down to Java or Go. Some teams use Python for scripting, data processing, ...
Example: Team UM (User Management) has 10+ Java services and 30+ Go services.
The question is, for each new business requirement, what should we do:
Example: the business requirement says we must match/compare user eKYC data with bank data (name, DOB, ID, ...).
Backend services talk to Frontend, and talk to each other.
How do they communicate?
The first way is through an API. This is the direct way: you send a request, then you wait for the response.
HTTP
gRPC
There are no hard rules on how to design APIs, only some best practices, like REST, ...
The correct answer will always be: "It depends". It depends on:
Why do we use HTTP for Client-Server and gRPC for Server-Server?
The second way is through a Message Broker; the most well known is Kafka.
The main idea is decoupling.
Imagine service A needs to call services B, C, D, E after doing some action, but B just died. We must handle B's errors gracefully if B is not that important (i.e. it does not affect A's main flow). Now imagine not just one B, but many: B1, B2, B3, ... Bn. Handling every one of them gets depressing fast.
A Message Broker is a way to detach B from A.
A dumb explanation goes like this: each time A does something, A produces a message to the Message Broker, then A forgets about it. Then B1, B2, ... can each consume A's message if they want and do something with it; A does not know and does not need to know about it.
```mermaid
sequenceDiagram
    participant A
    participant B
    participant C
    participant D
    A ->> B: do something
    A ->> C: do something
    A ->> D: do something
```
```mermaid
sequenceDiagram
    participant A
    participant B
    participant C
    participant D
    A ->> B: do something
    A ->> C: do something
    A -x D: do something but failed
```
```mermaid
sequenceDiagram
    participant A
    participant B
    participant C
    participant D
    participant Kafka
    A ->> B: do something
    A ->> C: do something
    A ->> Kafka: produce message
    D ->> Kafka: consume message
    D ->> D: do something
```
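The produce-and-forget idea can be sketched in a few lines of Go, using a channel as an in-process stand-in for Kafka (the `Broker` type and all names are illustrative, not a real Kafka client). A produces once; consumers subscribe independently, and A never knows who is listening.

```go
package main

import (
	"fmt"
	"sync"
)

// Broker is a toy stand-in for Kafka: subscribers get their own
// buffered channel, and a producer fans messages out to all of them.
type Broker struct {
	mu   sync.Mutex
	subs []chan string
}

// Subscribe registers a new consumer (e.g. service D).
func (b *Broker) Subscribe() <-chan string {
	b.mu.Lock()
	defer b.mu.Unlock()
	ch := make(chan string, 8)
	b.subs = append(b.subs, ch)
	return ch
}

// Produce is fire-and-forget from the producer's point of view.
func (b *Broker) Produce(msg string) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for _, ch := range b.subs {
		ch <- msg
	}
}

func main() {
	broker := &Broker{}
	d := broker.Subscribe() // service D consumes if it wants to

	broker.Produce("A did something") // A forgets about it after this

	fmt.Println("D consumed:", <-d) // D consumed: A did something
}
```

If D dies, A's `Produce` still succeeds: D is detached from A's main flow, which is exactly the decoupling the diagrams above show.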
Pro tip: use proto to define models (if you can) to take advantage of automatic breaking-change detection.
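A minimal sketch of what that could look like in proto3 (the message and field names are illustrative, loosely following the eKYC example above; tooling such as `buf breaking` can diff schema versions and flag incompatible edits):

```proto
syntax = "proto3";

package user.v1;

// Illustrative model shared between services.
message EkycProfile {
  string name = 1;
  string dob  = 2; // date of birth
  string id   = 3; // national ID number
  // Removing or renumbering a field is a breaking change;
  // reserve deleted tags instead of reusing them:
  reserved 4;
}
```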
You should know about DRY, SOLID, KISS, YAGNI, and Design Patterns. The basics are learning to recognize which is which when you read code. Truly understanding them means knowing when to use them and when not to.
All of the above are industry standard.
Business moves fast, so a feature may be implemented today but thrown out the window tomorrow (like A/B testing: one variant is chosen, the other says bye). So how do we adapt? The problem is to detect which code/functions are likely stable and resistant to change, and which are likely to change.
For each service, I often split the code into 3 layers: handler, service, repository.
The handler layer almost never changes. The repository layer rarely changes. The service layer changes daily; this is where I spend most of my time.
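The three layers can be sketched like this in Go (all type and function names here are illustrative, not real ZaloPay code). Note how the volatile service layer sits between the stable handler and repository layers, wired together through an interface:

```go
package main

import "fmt"

// Repository layer: data access. Rarely changes.
type UserRepository interface {
	FindName(id int) (string, error)
}

type inMemoryRepo struct{ data map[int]string }

func (r *inMemoryRepo) FindName(id int) (string, error) {
	name, ok := r.data[id]
	if !ok {
		return "", fmt.Errorf("user %d not found", id)
	}
	return name, nil
}

// Service layer: business logic. Changes daily.
type UserService struct{ repo UserRepository }

func (s *UserService) Greeting(id int) (string, error) {
	name, err := s.repo.FindName(id)
	if err != nil {
		return "", err
	}
	return "Hello, " + name, nil
}

// Handler layer: transport glue. Almost never changes.
func handleGreeting(s *UserService, id int) {
	msg, err := s.Greeting(id)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(msg)
}

func main() {
	svc := &UserService{repo: &inMemoryRepo{data: map[int]string{1: "An"}}}
	handleGreeting(svc, 1) // Hello, An
}
```

When the business rule changes (say, the greeting format), only `UserService` is touched; the handler and repository stay as they are.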
The previous question can be asked in many ways:
My answer: just as a Message Broker introduces decoupling between services, our code should be loosely coupled too. That means two functions that do not share the same business logic can be deleted without breaking each other.
For example, we can send notifications to users via SMS, Zalo, or in-app noti (3 providers). They are all independent features serving the same purpose: alert the user about something. What happens if we add providers or remove some? Existing providers keep working as usual, and new providers should behave properly too.
So we have a send-noti abstraction, which each provider can implement, treated like a module (think Lego) that can be plugged in and played right away.
And when we no longer need send-noti, we can delete the whole thing, all providers included, without affecting the main flow.
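In Go, the plug-and-play abstraction naturally falls out of a small interface. This is a hedged sketch with made-up provider types, not the real noti service:

```go
package main

import "fmt"

// Notifier is the send-noti abstraction every provider implements.
type Notifier interface {
	Send(userID int, msg string) error
}

// Each provider is an independent Lego brick.
type SMSNotifier struct{}

func (SMSNotifier) Send(userID int, msg string) error {
	fmt.Printf("SMS to user %d: %s\n", userID, msg)
	return nil
}

type ZaloNotifier struct{}

func (ZaloNotifier) Send(userID int, msg string) error {
	fmt.Printf("Zalo to user %d: %s\n", userID, msg)
	return nil
}

// notifyAll only knows the abstraction. Adding or removing a provider
// changes the slice passed in; existing providers are untouched, and a
// failing provider is handled gracefully instead of breaking the rest.
func notifyAll(providers []Notifier, userID int, msg string) {
	for _, p := range providers {
		if err := p.Send(userID, msg); err != nil {
			fmt.Println("provider failed:", err)
		}
	}
}

func main() {
	notifyAll([]Notifier{SMSNotifier{}, ZaloNotifier{}}, 7, "Your OTP is ready")
}
```

Deleting the whole feature means deleting `Notifier` and its implementations; nothing else in the codebase depends on them.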
Testing is not a way to find bugs, but a way to make sure what we code is actually what we think/expect.
The best case is testing with real dependencies (real services, real Redis, real MySQL, real Kafka, ...). But that is not easy to set up yourself.
The easier way is to use mocks. Mock all dependencies to test every edge case you can think of.
TODO: Show example
How do we make code easier to test? Same loosely-coupled idea as above.
Some tips:
Start with the basics: getting data from the database.
```mermaid
sequenceDiagram
    participant service
    participant database
    service ->> database: get (100ms)
```
Then get data from the cache first, falling back to the database.
```mermaid
sequenceDiagram
    participant service
    participant cache
    participant database
    service ->> cache: get (5ms)
    alt not exist in cache
        service ->> database: get (100ms)
    end
```
If the data is already in the cache, we get it very fast (5ms), nearly instant. If not, we pay a penalty: we must hit the database and then re-update the cache if needed (>105ms). The fast path is worth it even if we sometimes pay the penalty.
Basic cache strategy: combine Write Through and Read Through
```mermaid
sequenceDiagram
    participant service
    participant cache
    participant database
    note over service,database: Read Through
    service ->> cache: get
    alt not exist in cache
        service ->> database: get
        service ->> cache: set
    end
    note over service,database: Write Through
    service ->> database: set
    service ->> cache: set
```
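The combined strategy can be sketched in Go with two plain maps standing in for the cache (e.g. Redis) and the database (e.g. MySQL); the `Store` type is illustrative only:

```go
package main

import "fmt"

// Store pairs a cache with a database behind one API.
type Store struct {
	cache map[string]string
	db    map[string]string
}

// Get: Read Through. Try the cache first (~5ms); on a miss, read the
// database (~100ms) and backfill the cache so the next read is fast.
func (s *Store) Get(key string) (string, bool) {
	if v, ok := s.cache[key]; ok {
		return v, true // fast path
	}
	v, ok := s.db[key]
	if !ok {
		return "", false
	}
	s.cache[key] = v // backfill
	return v, true
}

// Set: Write Through. Write the database first, then the cache, so the
// cache never holds data the database does not.
func (s *Store) Set(key, value string) {
	s.db[key] = value
	s.cache[key] = value
}

func main() {
	s := &Store{cache: map[string]string{}, db: map[string]string{"user:1": "An"}}

	v, _ := s.Get("user:1") // miss: database read + cache backfill
	fmt.Println(v)          // An

	s.Set("user:2", "Binh")
	v, _ = s.Get("user:2") // hit: served straight from cache
	fmt.Println(v)         // Binh
}
```

A real implementation also needs expiry/invalidation (TTLs, eviction) which this sketch skips.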