Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message-oriented middleware, and empowers developers to focus on differentiating work. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available. Get started with SQS in minutes using the AWS console, Command Line Interface, or SDK of your choice, and three simple commands.

SQS offers two types of message queues. Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery. SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent.
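As a sketch of the difference, here is how the two queue types might be created with boto3 (the helper and queue names are illustrative; FIFO queue names must end in `.fifo` and set the `FifoQueue` attribute):

```python
def create_queue_args(name, fifo=False):
    """Build parameters for the SQS CreateQueue call."""
    attrs = {}
    if fifo:
        # FIFO queue names must end in ".fifo" and declare the FifoQueue attribute.
        name = name if name.endswith(".fifo") else name + ".fifo"
        attrs["FifoQueue"] = "true"
        # Deduplicate on a hash of the message body instead of explicit dedup IDs.
        attrs["ContentBasedDeduplication"] = "true"
    return {"QueueName": name, "Attributes": attrs}

# With AWS credentials configured:
# import boto3
# sqs = boto3.client("sqs")
# sqs.create_queue(**create_queue_args("orders"))             # standard queue
# sqs.create_queue(**create_queue_args("orders", fifo=True))  # orders.fifo
```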

Benefits of using Amazon SQS

  • Security — You control who can send messages to and receive messages from an Amazon SQS queue. Server-side encryption (SSE) lets you transmit sensitive data by protecting the contents of messages in queues using keys managed in AWS Key Management Service (AWS KMS).
  • Durability — For the safety of your messages, Amazon SQS stores them on multiple servers. Standard queues support at-least-once message delivery, and FIFO queues support exactly-once message processing.
  • Availability — Amazon SQS uses redundant infrastructure to provide highly-concurrent access to messages and high availability for producing and consuming messages.
  • Scalability — Amazon SQS can process each buffered request independently, scaling transparently to handle any load increases or spikes without any provisioning instructions.
  • Reliability — Amazon SQS locks your messages during processing, so that multiple producers can send and multiple consumers can receive messages at the same time.
  • Customization — Your queues don’t have to be exactly alike — for example, you can set a default delay on a queue. You can store the contents of messages larger than 256 KB using Amazon Simple Storage Service (Amazon S3) or Amazon DynamoDB, with Amazon SQS holding a pointer to the Amazon S3 object, or you can split a large message into smaller messages.
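The large-message pattern in the last bullet can be sketched as follows (the bucket, key, and helper names here are placeholders; AWS's own implementation of this pattern is the SQS Extended Client Library):

```python
import json

def pointer_message(bucket, key):
    """Message body holding only a pointer to the real payload in Amazon S3."""
    return json.dumps({"s3Bucket": bucket, "s3Key": key})

# With AWS credentials configured, the producer uploads the payload and sends
# only the pointer; the consumer reads the pointer and fetches the object:
# s3.put_object(Bucket="media-bucket", Key="payload-123", Body=big_payload)
# sqs.send_message(QueueUrl=queue_url,
#                  MessageBody=pointer_message("media-bucket", "payload-123"))
```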

AWS SQS Features

Confidential, secure data: Server-side encryption (SSE) lets you transmit highly sensitive or confidential data by encrypting the contents of messages in a queue, using keys managed in AWS Key Management Service.

No message loss: Amazon SQS ensures that messages sent between components are not lost; message data is stored redundantly across multiple servers, for both standard and FIFO queues.

Message locking: When a consumer receives a message, Amazon SQS locks it (hides it from other consumers) while it is being processed, so multiple producers can send and multiple consumers can receive messages at the same time. If processing fails, the lock expires and the message becomes available for processing again.

Message retention: Messages can be retained in a queue for a maximum period of 14 days.
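Retention is a per-queue attribute, settable anywhere from 60 seconds up to that 14-day maximum; a minimal sketch (the helper name and queue URL are illustrative):

```python
def retention_attrs(days):
    """Build the MessageRetentionPeriod queue attribute (60 s to 14 days)."""
    seconds = int(days * 24 * 60 * 60)
    if not 60 <= seconds <= 14 * 24 * 60 * 60:
        raise ValueError("retention must be between 60 seconds and 14 days")
    return {"MessageRetentionPeriod": str(seconds)}

# With AWS credentials configured:
# sqs.set_queue_attributes(QueueUrl=queue_url, Attributes=retention_attrs(14))
```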

Queue sharing: A queue can be shared securely with other AWS accounts, or anonymously. Sharing can be restricted by the time of day and by the IP address of the caller.
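Such restrictions are expressed through a queue access policy; a minimal sketch, assuming a placeholder queue ARN and CIDR range:

```python
import json

def ip_restricted_policy(queue_arn, cidr):
    """Queue policy allowing SendMessage only from the given CIDR range."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": "sqs:SendMessage",
            "Resource": queue_arn,
            # IAM condition key restricting callers by source IP address.
            "Condition": {"IpAddress": {"aws:SourceIp": cidr}},
        }],
    })

# With AWS credentials configured:
# sqs.set_queue_attributes(
#     QueueUrl=queue_url,
#     Attributes={"Policy": ip_restricted_policy(
#         "arn:aws:sqs:us-east-1:123456789012:orders", "203.0.113.0/24")})
```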

Unlimited queues and messages: Users can create an unlimited number of SQS queues, holding an unlimited number of messages, in any region.

Work queues: Decouple the components of a distributed application so that they do not all have to process the same amount of work in parallel.

Buffer and batch operations: Add scalability and reliability to your system architecture, and smooth out bursts of data without losing messages or increasing latency.

Request offloading: Move slow operations off interactive request paths by enqueueing the work and processing it asynchronously.

In short, Simple Queue Service provides highly scalable, resilient messaging: it protects messages from failures and lets you process them securely.
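To make the producer side concrete, here is a hedged sketch of sending a message with a per-message delay (the queue URL and helper name are placeholders; `DelaySeconds` may range from 0 to 900, i.e. up to 15 minutes):

```python
def send_args(queue_url, body, delay=0):
    """Build parameters for SQS SendMessage; DelaySeconds is 0-900 seconds."""
    if not 0 <= delay <= 900:
        raise ValueError("DelaySeconds must be between 0 and 900")
    return {"QueueUrl": queue_url, "MessageBody": body, "DelaySeconds": delay}

# With AWS credentials configured:
# sqs.send_message(**send_args(queue_url, '{"orderId": 42}', delay=10))
```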

Differences between Amazon SQS, Amazon MQ, and Amazon SNS

Amazon SQS and Amazon SNS are queue and topic services that are highly scalable, simple to use, and don’t require you to set up message brokers. We recommend these services for new applications that can benefit from nearly unlimited scalability and simple APIs.

Amazon MQ is a managed message broker service that provides compatibility with many popular message brokers. We recommend Amazon MQ for migrating applications from existing message brokers that rely on compatibility with APIs such as JMS or protocols such as AMQP, MQTT, OpenWire, and STOMP.

Pricing for Amazon SQS

Amazon SQS has no upfront costs. The first million monthly requests are free. After that, you pay based on the number of requests (each 64 KB chunk of a payload is billed as one request), plus any charges for interactions with Amazon S3 and the AWS Key Management Service.

Summary of AWS SQS

  • In the real world, we often need to send a message to a client after completing some action. For example, when a client orders something from a site, they receive a confirmation message once the order has been processed. For this we have two servers: one that processes the order, and a second that sends the message to the client.
  • After processing the order, the ordering server passes a message to the message server, which then sends it to the client. The ordering server passes each message only once, whether or not the message server receives it.
  • If the ordering server passes a message while the message server is down due to some issue, the message is lost, and the client never receives the order-confirmation message.
  • In this scenario, the ordering server is directly connected to the message server. This is known as tight coupling.
  • Tightly coupled applications or programs are interdependent, directly connected systems: if one system goes down, the other is affected.
  • Tight coupling can also mean an application that manages different services across multiple operating systems to complete a workflow, so that the output of one system is given as input to the next.
  • Decoupled structure: a server is not directly connected to another server. To avoid the situation above, instead of tightly coupling the servers we place some other application, such as a message queue (MQ), between them.
  • MQ (Message Queue): a queue of messages sent between applications.
  • A message is the data transported between the sender and the receiver application.
  • An MQ has internal storage where it keeps all data/messages; this storage is known as the queue.
  • MQ is a concept, not a program, and there are multiple products that implement it.
  • Open-source software implementing the message-queue concept includes RabbitMQ, Apache ActiveMQ, Apache Kafka, and others. Now the ordering server passes the message to the MQ, and the messaging server takes messages from the MQ and sends them to the client. Even if the messaging server goes down for some time, the MQ keeps the messages stored in the queue, and the messaging server can take them when it is up again.
  • To remove tight coupling we use an MQ as middleware; because it provides a limited set of functions, it is faster than a database.
  • Middleware programs act as intermediaries between other programs; message queue programs fall into this category.
  • We could use a database such as MySQL between two tightly coupled programs, but databases provide many features, and that makes the process slower.
  • SQS (Simple Queue Service): the managed message queue service from AWS. It is also serverless.
  • A single SQS message can be from 1 byte up to 256 KB in size.
  • SQS retains messages in a queue for a minimum time of 1 minute up to a maximum of 14 days.
  • Poll and Consume:

— Poll: The message server repeatedly goes to the MQ and asks whether there are messages for it.

— Consume: If the MQ has data for the message server, the server fetches it and does its work.
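With boto3, the poll-and-consume cycle looks roughly like this (the queue URL and `process` function are placeholders; `WaitTimeSeconds=20` enables long polling so the consumer doesn't busy-poll):

```python
def delete_batch_entries(messages):
    """Build DeleteMessageBatch entries from received SQS messages."""
    return [{"Id": str(i), "ReceiptHandle": m["ReceiptHandle"]}
            for i, m in enumerate(messages)]

# With AWS credentials configured:
# sqs = boto3.client("sqs")
# while True:
#     resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10,
#                                WaitTimeSeconds=20)    # poll (long polling)
#     msgs = resp.get("Messages", [])
#     for m in msgs:
#         process(m["Body"])                            # consume
#     if msgs:
#         sqs.delete_message_batch(QueueUrl=queue_url,
#                                  Entries=delete_batch_entries(msgs))
```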

  • Retention Time: The MQ program never deletes a message on its own until the retention time expires; we have to program the consumer to delete each message as soon as it consumes it. — While a consumer is processing a message in the queue, SQS temporarily hides the message from other consumers.
  • This is done by setting a visibility timeout on the message, a period of time during which SQS prevents other consumers from receiving and processing the message.
  • The visibility timeout begins when SQS hands over a message to the consumer. During this time, the consumer has to do two things:

1. It has to complete the processing of the message.

2. Delete the message from the queue. If the consumer fails before deleting the message, the visibility timeout expires and the message becomes visible to other consumers again. — Dead Message: since we write the consumer program ourselves, it can contain bugs. If a bug prevents the consumer from deleting a message, the consumer will receive the same message multiple times through continuous polling.
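A consumer that needs more time can extend the visibility timeout before it expires; a sketch (the queue URL, receipt handle, and helper name are placeholders; the timeout may be 0 to 43,200 seconds, i.e. 12 hours):

```python
VISIBILITY_MAX = 12 * 60 * 60  # 43200 seconds, the SQS maximum

def visibility_args(queue_url, receipt_handle, timeout):
    """Build parameters for SQS ChangeMessageVisibility."""
    if not 0 <= timeout <= VISIBILITY_MAX:
        raise ValueError("VisibilityTimeout must be 0-43200 seconds")
    return {"QueueUrl": queue_url, "ReceiptHandle": receipt_handle,
            "VisibilityTimeout": timeout}

# While still processing, extend the lease by another 5 minutes:
# sqs.change_message_visibility(**visibility_args(queue_url, handle, 300))
```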

  • When the MQ sees that a particular message has been delivered too many times, it tags that message as a dead message, and no consumer will receive it again.
  • Dead messages are stored in a separate queue known as the DLQ (dead-letter queue). The DLQ is used to break the loop of the same message being delivered to the consumer repeatedly.
  • Lambda is the serverless Function-as-a-Service offering from AWS, used to implement serverless computing.
  • We can integrate SQS with Lambda, for example by setting SQS as a destination for a Lambda function. Lambda is a compute service that lets you run code without provisioning or managing servers. — AWS SQS is a message queuing service provided by Amazon.
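The dead-letter behavior described above is configured on the source queue through a redrive policy; a sketch assuming a placeholder DLQ ARN (after `maxReceiveCount` failed receives, SQS moves the message to the DLQ):

```python
import json

def redrive_attrs(dlq_arn, max_receive_count=5):
    """Build the RedrivePolicy attribute for a source queue."""
    return {"RedrivePolicy": json.dumps({
        "deadLetterTargetArn": dlq_arn,
        "maxReceiveCount": str(max_receive_count),
    })}

# With AWS credentials configured:
# sqs.set_queue_attributes(
#     QueueUrl=source_queue_url,
#     Attributes=redrive_attrs("arn:aws:sqs:us-east-1:123456789012:orders-dlq"))
```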

NASA Case Study

“We now have an agile, scalable foundation on which to do all kinds of amazing things. Much like with the exploration of space, we’re just starting to imagine all that we can do with it.”

Bryan Walls, Imagery Experts Deputy Program Manager, NASA

About NASA

Established in 1958, the National Aeronautics and Space Administration (NASA) has been working around the world — and off of it — for almost 60 years, trying to answer some basic questions: What’s out there in space? How do we get there? What will we find? What can we learn there, or learn just by trying to get there, that will make life better here on Earth?

Exploring Space: No Rocket Science Degree Needed

Have you ever looked up at night and wondered about the mysteries of space? Or marveled at the expansiveness of our galaxy? You can easily explore all this and more at the NASA Image and Video Library, which provides easy access to more than 140,000 still images, audio recordings, and videos — documenting NASA’s more than half a century of achievements in exploring the vast unknown. For NASA, providing the public with such easy access to the wonders of space has been a journey all its own.

NASA began providing online access to photos, video, and audio in the early 2000s, when media capture began to shift from analog and film to digital. Before long, each of NASA’s 10 field centers was making its imagery available online, including digitized versions of some older assets.

Therein was the challenge: “With media in so many different places, you needed institutional knowledge of NASA to know where to look,” says Rodney Grubbs, Imagery Experts Program Manager at NASA. “If you wanted a video of the space shuttle launch, you had to go to the Kennedy Space Center website. If you wanted pictures from the Hubble Space Telescope, you went to the Goddard Space Flight Center website. With 10 different centers and dozens of distributed image collections, it took a lot of digging around to find what you wanted.”

Early efforts to provide a one-stop shop consisted of essentially “scraping” content from the different sites, bringing it together in one place, and layering a search engine on top. “In large part, those initial efforts were unsuccessful because each center categorized its imagery in different ways,” says Grubbs. “As a result, we often had five to six copies of the same image, each described in different ways, which made searches difficult and delivered a poor user experience.”

In 2011, NASA decided that the best approach to address this issue was to start over. By late 2014, all the necessary pieces for a second attempt were in place:

• The Imagery Experts Program had developed and published a common metadata standard, which all NASA’s centers had adopted.

• The Web Enterprise Service Technologies (WESTPrime) service contract, one of five agency-wide service contracts under NASA’s Enterprise Services program, provided a delivery vehicle for building and managing the new site.

• The Federal Risk and Authorization Management Program (FedRAMP) provided a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services.

“We wanted to build our new solution in the cloud for two reasons,” says Grubbs. “By 2014, like with many government agencies, NASA was trying to get away from buying hardware and building data centers, which are expensive to build and manage. The cloud also provided the ability to scale with ease, as needed, paying for only the capacity we use instead of having to make a large up-front investment.”

Decades of NASA Achievements — All in One Place

Development of the new NASA Image and Video Library was handled by the Web Services Office within NASA’s Enterprise Service and Integration Division. Technology selection, solution design, and implementation was managed by InfoZen (acquired by and now operating as ManTech International), the WESTPrime contract service provider. As an Advanced Consulting Partner of the AWS Partner Network (APN), ManTech International chose to build the solution on Amazon Web Services (AWS). “Amazon was the largest cloud services provider, had a strong government cloud presence, and offered the most suitable cloud in terms of elasticity,” recalls Sandeep Shilawat, Cloud Program Manager at ManTech International.

NASA formally launched its Image and Video Library in March 2017. Key features include:

• A user interface that automatically scales for PCs, tablets, and mobile phones across virtually every browser and operating system.

• A search interface that lets people easily find what they’re looking for, including the ability to choose between gallery view and list view and to narrow down search results by media type and/or by year.

• The ability to easily download any media found on the site — or share it on Pinterest, Facebook, Twitter, or Google+.

• Access to the metadata associated with each asset, such as file size, file format, which center created the asset, and when it was created. When available, users can also view EXIF/camera data for still images such as exposure, shutter speed, and lens used.

• An application programming interface (API) for automated uploads of new content — including integration with NASA’s existing authentication mechanism.

Architecture

The NASA Image and Video Library is a cloud-native solution, with the front-end web app separated from the backend API. It runs as immutable infrastructure in a fully automated environment, with all infrastructure defined in code to support continuous integration and continuous deployment (CI/CD).

In building the solution, ManTech International took advantage of the following AWS services:

• Amazon Elastic Compute Cloud (Amazon EC2), which provides secure, resizable compute capacity in the cloud. This enables NASA to scale up under load and scale down during periods of inactivity to save money, and pay for only what it uses.

• Elastic Load Balancing (ELB), which is used to distribute incoming traffic across multiple Amazon EC2 instances, as required to achieve redundancy and fault-tolerance.

• Amazon Simple Storage Service (Amazon S3), which supports object storage for incoming (uploaded) media, metadata, and published assets.

• Amazon Simple Queue Service (Amazon SQS), which is used to decouple incoming jobs from pipeline processes.

• Amazon Relational Database Service (Amazon RDS), which is used for automatic synchronization and failover.

• Amazon DynamoDB, a fast and flexible NoSQL database service, which is used to track incoming jobs, published assets, and users.

• Amazon Elastic Transcoder, which is used to transcode audio and video to various resolutions.

• Amazon CloudSearch, which is used to support searching by free text or fields.

• Amazon Simple Notification Service (Amazon SNS), which is used to trigger the processing pipeline when new content is uploaded.

• AWS CloudFormation, which enables automated creation, updating, and destruction of AWS resources. ManTech International also used the Troposphere library, which enables the creation of objects via AWS CloudFormation using Python instead of hand-coded JSON — each object representing one AWS resource such as an instance, an Elastic IP (EIP) address, or a security group.

• Amazon CloudWatch, which provides a monitoring service for AWS cloud resources and the applications running on AWS.

An Image and Video Library for the Future

Through its use of AWS, with support from ManTech International, NASA is making its vast wealth of pictures, videos, and audio files — previously in some 60 “collections” across NASA’s 10 centers — easily discoverable in one centralized location, delivering these benefits:

• Easy Access to the Wonders of Space. The Image and Video Library automatically optimizes the user experience for each user’s particular device. It is also fully compliant with Section 508 of the Rehabilitation Act, which requires federal agencies to make their technology solutions accessible to people with disabilities. Captions can be turned on or off for videos played on the site, and text-based caption files can be downloaded for any video.

• Built-in Scalability. All components of the NASA Image and Video Library are built to scale on demand, as needed to handle usage spikes. “On-demand scalability will be invaluable for events such as the solar eclipse that’s happening later this summer — both as we upload new media and as the public comes to view that content,” says Bryan Walls, Imagery Experts Deputy Program Manager at NASA.

• Good Use of Taxpayer Dollars. By building its Image and Video Library in the cloud, NASA avoided the costs associated with deploying and maintaining server and storage hardware in-house. Instead, the agency can simply pay for the AWS resources it uses at any given time.

While NASA’s new Image and Video Library delivers a wealth of new convenience and capabilities, for people like Grubbs and Walls, it’s just the beginning. “We now have an agile, scalable foundation on which to do all kinds of amazing things,” says Walls. “Much like with the exploration of space, we’re just starting to imagine all that we can do with it.”
