Decoupling refers to components remaining unaware of each other while they complete their work toward some greater output. The term can describe the components that make up a simple application, or it can apply on a much broader scale across an entire architecture.
A decoupled application architecture allows each component to perform its tasks independently; components remain completely autonomous and unaware of each other, so a change in one service shouldn’t require a change in the other services.
Ultimately, decoupling promotes scalability, because you can scale only the pieces of your infrastructure that your capacity planning identifies as bottlenecks. What’s more, you can make those pieces redundant, increasing availability at the same time.
What is Application Decoupling?
Decoupling an application refers to the process of splitting the application into smaller, independent components (often called microservices). The opposite approach is to maintain a single, monolithic application.
What is Decoupled Architecture?
Decoupled architecture is a type of computing architecture that enables computing components or layers to execute independently while still interfacing with each other. Decoupled architecture separates a system’s memory access and instruction cycle processes from execution-stage processes by implementing a data buffer.
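The buffer idea above can be sketched in a few lines of plain Python. This is a minimal, hypothetical simulation (not the AWS API): a producer and a consumer share only the queue, so neither knows anything about the other's internals.

```python
import queue
import threading

# The queue is the only thing the two components share (the "data buffer").
buffer = queue.Queue()
results = []

def producer():
    for i in range(5):
        buffer.put(f"task-{i}")   # hand work off; no knowledge of the consumer
    buffer.put(None)              # sentinel: no more work

def consumer():
    while True:
        item = buffer.get()
        if item is None:
            break
        results.append(item.upper())  # process independently of the producer

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```

Either side can be replaced, scaled, or slowed down without changing the other, which is exactly the property the decoupled architecture definition describes.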
Fully-managed message queuing service
Requires no administrative overhead and little configuration
Acts as a buffer between senders and receivers
Processes billions of messages per day
Stores all message queues and messages within a single, highly-available AWS Region with multiple redundant Availability Zones
Messages are stored until they are processed or deleted
Queues can be shared anonymously or with specific AWS accounts
Designed for asynchronous operation
Queue sharing can also be restricted by IP address and time of day
SSE encrypts messages as soon as Amazon SQS receives them, then protects them using keys managed in AWS KMS
Work queues: Decouple components of a distributed application that may not all process the same amount of work simultaneously
Batch operation buffer: Add scalability and reliability to your architecture and smooth out temporary volume spikes without losing messages or increasing latency
Request offloading: Move slow operations off of interactive request paths by queuing the request
Scaling trigger: Use Amazon SQS queues to help determine the load on an application, and when combined with Auto Scaling, you can scale the number of Amazon EC2 instances out or in, depending on the volume of traffic
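The scaling-trigger idea can be sketched as a small function that maps queue depth to a desired instance count. The thresholds and bounds here are hypothetical, not AWS defaults; in practice this logic lives in an Auto Scaling policy driven by the SQS queue-depth metric.

```python
import math

def desired_instances(queue_depth, msgs_per_instance=100, min_n=1, max_n=10):
    """Map an SQS backlog to an EC2 instance count (hypothetical thresholds).

    msgs_per_instance: how many queued messages one instance can absorb.
    min_n / max_n:     the Auto Scaling group's size bounds.
    """
    n = math.ceil(queue_depth / msgs_per_instance)
    return max(min_n, min(max_n, n))
```

As the backlog grows the fleet scales out, and it scales back in (down to the minimum) when the queue drains.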
Dead letter queue support: A dead-letter queue (DLQ) is a queue of messages that could not be processed. It receives messages after a maximum number of processing attempts has been reached. A DLQ is just like any other Amazon SQS queue. Messages can be sent to and received from it like any other SQS queue. You can create a DLQ from the Amazon SQS API and the SQS console.
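The DLQ redrive behavior can be illustrated with a toy in-memory queue. This is a simplified sketch: the `MAX_RECEIVES` constant plays the role of the SQS `maxReceiveCount` redrive setting, and the value chosen here is hypothetical.

```python
from collections import deque

MAX_RECEIVES = 3  # analogous to the SQS maxReceiveCount redrive setting

main_queue = deque([("msg-1", 0), ("msg-2", 0)])  # (message id, receive count)
dead_letter_queue = []

def receive_and_fail(failing_msg_id):
    """Simulate one pass of a consumer that always fails on failing_msg_id."""
    for _ in range(len(main_queue)):
        msg_id, receives = main_queue.popleft()
        receives += 1
        if msg_id == failing_msg_id:
            # Processing failed: the message returns to the queue, or moves
            # to the DLQ once it has been received MAX_RECEIVES times.
            if receives >= MAX_RECEIVES:
                dead_letter_queue.append(msg_id)
            else:
                main_queue.append((msg_id, receives))
        # Otherwise: processed successfully and deleted from the queue.

for _ in range(MAX_RECEIVES):
    receive_and_fail("msg-2")
```

After three failed receives, the poison message lands in the DLQ instead of cycling through the main queue forever.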
Visibility timeout is the period of time that a message is “invisible” to the rest of your application after an application component retrieves it from the queue. During the visibility timeout, the component that received the message usually processes it and then deletes it from the queue. This prevents multiple components from processing the same message. When the application needs more time for processing, the visibility timeout can be extended.
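The visibility-timeout lifecycle can be simulated with a toy queue class. This is a sketch of the behavior only (the real SQS timeout is per-receive and defaults to 30 seconds; the short timeout here just keeps the demo fast).

```python
import time

class SimpleQueue:
    """Toy queue with a visibility timeout, mimicking the SQS behavior."""

    def __init__(self, visibility_timeout):
        self.visibility_timeout = visibility_timeout
        self.messages = {}  # msg_id -> (body, invisible_until)

    def send(self, msg_id, body):
        self.messages[msg_id] = (body, 0.0)

    def receive(self):
        now = time.monotonic()
        for msg_id, (body, invisible_until) in self.messages.items():
            if now >= invisible_until:
                # Hide the message from other consumers for the timeout window.
                self.messages[msg_id] = (body, now + self.visibility_timeout)
                return msg_id, body
        return None

    def delete(self, msg_id):
        self.messages.pop(msg_id, None)

q = SimpleQueue(visibility_timeout=0.2)
q.send("m1", "hello")
first = q.receive()    # message becomes invisible to everyone else
second = q.receive()   # nothing visible during the timeout window
time.sleep(0.25)       # the consumer never deleted it...
third = q.receive()    # ...so it reappears for redelivery
```

A consumer that processes successfully would call `delete` before the timeout expires; a crashed consumer simply lets the message become visible again.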
Long polling is a way to retrieve messages from your Amazon SQS queues. The default of short polling returns immediately, even if the message queue being polled is empty. However, long polling doesn’t return a response until a message arrives in the message queue or the long poll times out. Long polling makes it inexpensive to retrieve messages from your Amazon SQS queue as soon as the messages are available.
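The difference between the two polling modes can be shown with Python's standard `queue` module standing in for SQS. This is an illustrative sketch, not the SQS API (where long polling is enabled via `WaitTimeSeconds` on `ReceiveMessage`).

```python
import queue
import threading

q = queue.Queue()

def short_poll(q):
    """Return immediately, even if the queue is empty (like SQS short polling)."""
    try:
        return q.get_nowait()
    except queue.Empty:
        return None

def long_poll(q, wait_seconds):
    """Block until a message arrives or the wait time elapses (long polling)."""
    try:
        return q.get(timeout=wait_seconds)
    except queue.Empty:
        return None

empty_result = short_poll(q)                       # empty queue -> immediate None
threading.Timer(0.1, q.put, args=("hi",)).start()  # a message arrives shortly
long_result = long_poll(q, wait_seconds=1.0)       # waits, then receives it
```

Short polling burns a request on the empty queue; long polling waits and picks the message up on a single call, which is why it is cheaper when messages arrive intermittently.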
Appropriate use cases: service-to-service communication, asynchronous work items, state-change notifications
Inappropriate use cases: selecting specific messages, large messages
It’s important to know when a particular technology won’t fit well with your use case. Messaging has its own set of commonly encountered anti-patterns. It’s tempting to have the ability to receive messages selectively from a queue that match a particular set of attributes or even match a one-time logical query.
For example, a service requests a message with a particular attribute because the message contains a response to another message that the service sent out. This can lead to a scenario where there are messages in the queue that no one is polling for and are never consumed. Most messaging protocols and implementations work best with reasonably sized messages (in the tens or hundreds of KBs). As message sizes grow, it’s best to use a dedicated storage system, such as Amazon S3, and pass a reference to an object in that store in the message itself.
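The large-message workaround described above (often called the claim-check pattern) can be sketched with in-memory stand-ins for Amazon S3 and the queue. Everything here is hypothetical scaffolding; with real AWS services, the payload would go to an S3 bucket and only the object key would travel in the SQS message.

```python
import uuid

# Hypothetical in-memory stand-ins for Amazon S3 and an SQS queue.
object_store = {}
message_queue = []

def send_payload(payload: bytes, size_limit: int = 256 * 1024):
    """Store oversized payloads out of band; enqueue only a reference."""
    if len(payload) > size_limit:
        key = f"payloads/{uuid.uuid4()}"
        object_store[key] = payload
        message_queue.append({"s3_ref": key})    # small pointer message
    else:
        message_queue.append({"body": payload})  # small messages go inline

def receive_payload():
    msg = message_queue.pop(0)
    if "s3_ref" in msg:
        return object_store[msg["s3_ref"]]  # dereference from the store
    return msg["body"]

send_payload(b"x" * 300_000)  # too big: routed via the object store
send_payload(b"small")        # fits inline in the message
```

The queue only ever carries small messages, keeping it within the sweet spot of tens to hundreds of KBs while arbitrarily large payloads live in dedicated storage.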
Web service that makes it easy to set up, operate, and send notifications from the cloud
Uses the publish-subscribe (pub-sub) model (clients subscribe, and messages are pushed to them immediately upon being sent)
Clients subscribe to a “topic”
You control access to the topic by determining which publishers and subscribers can communicate with the topic
Instead of sending a specific destination address in each message, the publisher sends a message to the topic;
Amazon SNS then delivers the message to each subscriber of that topic
Each topic has a unique name
All subscribers to a topic receive the same messages published to that topic
Topics can be encrypted; Amazon SNS uses customer master keys managed in AWS KMS
Messages stored in encrypted form are replicated across multiple Availability Zones for durability
Encrypted messages are decrypted just before being delivered to subscribed endpoints
Endpoints include Amazon SQS queues, AWS Lambda functions, HTTPS webhooks, and more
1. Email or Email-JSON - Messages are sent to registered addresses as email. Email-JSON sends notifications as a JSON object, while Email sends text-based email.
2. HTTP/HTTPS - Subscribers specify a URL as part of the subscription registration; notifications will be delivered through an HTTP POST to the specified URL.
3. SMS Clients - Messages are sent to registered phone numbers as short message service (SMS) text messages.
4. Amazon SQS queues - Users can specify an SQS standard queue as the endpoint; Amazon SNS will enqueue a notification message to the specified queue. Note that FIFO queues are not currently supported.
5. AWS Lambda functions - Messages can be delivered to AWS Lambda functions for handling message customizations, enabling message persistence or communicating with other AWS services.
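The pub-sub fan-out across endpoints like those above can be sketched in a few lines. This is a minimal simulation of the model, not the SNS API: the two subscriber lists play the hypothetical roles of an SQS queue endpoint and a Lambda function endpoint.

```python
# Minimal sketch of the pub-sub model: publishers address a named topic,
# and every subscriber of that topic receives each message.
subscriptions = {}  # topic name -> list of subscriber callbacks

def subscribe(topic, callback):
    subscriptions.setdefault(topic, []).append(callback)

def publish(topic, message):
    # The publisher knows only the topic name, never the subscribers.
    for callback in subscriptions.get(topic, []):
        callback(message)  # pushed to each subscriber immediately

sqs_like, lambda_like = [], []
subscribe("orders", sqs_like.append)     # stand-in for an SQS queue endpoint
subscribe("orders", lambda_like.append)  # stand-in for a Lambda endpoint
publish("orders", "order-123")
```

One `publish` call fans the message out to every subscriber, which is why SNS topics pair naturally with SQS queues when each subscriber needs its own durable copy of the stream.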
Single published message
No recall options
HTTP/HTTPS retry can be controlled by an Amazon SNS delivery policy
Order and delivery not guaranteed