Amazon Web Services (AWS) is helping NASA’s Jet Propulsion Laboratory (NASA JPL) reach an innovative milestone in deep space exploration. On Thursday, February 18, NASA’s Mars Perseverance rover landed on Mars after its 7-month, 300-million-mile journey from Earth. This is the first planetary NASA mission to handle mission-critical communications and telemetry data transfer in the cloud. During Perseverance’s mission on Mars, science and engineering data will be processed and hosted in AWS, enabling the Mars 2020 mission to benefit from the scalability, agility, and reliability of the cloud.
Now that the rover is on the surface and has completed initial checkouts, the Mars Rover team is receiving hundreds of images from Mars each day from a record number of cameras, amounting to thousands of images over Perseverance’s time on the planet. The cloud helps NASA JPL efficiently store, process, and distribute this high volume of data.
By using AWS, NASA JPL can process data from Mars on Earth faster than ever before. The increased processing speed helps NASA JPL make faster decisions about the health and safety of the rover. This information is critical for scientists and engineers planning the rover’s next-day activities. The rover requires visibility to drive, so the team must be able to send the next batch of instructions back to the rover within a specific timeframe. The increased efficiency will allow Mars 2020 to accomplish its ambitious goal of collecting more samples and driving longer distances during the prime mission than previous rovers.
AWS also powers the Mars mission website. The website will be able to scale up to meet demand at any given time, with millions of visitors anticipated at peak times.
“AWS is proud to support NASA JPL’s Perseverance mission,” said Teresa Carlson, vice president, worldwide public sector and industries at AWS. “From the outset, AWS cloud services have enabled NASA JPL in its mission to capture and share mission-critical images, and help to answer key questions about the potential for life on Mars.”
Exploring with NASA JPL
To help send data back to Earth, the Mars Rover is equipped with sensors, cameras, and microphones. The sensors will gather scientific data like atmospheric information, wind speeds, and weather. The microphones will collect the sounds of the planet. This data will be processed by JPL and made publicly available so viewers can explore Mars alongside NASA JPL.
The public will also be able to track the Mars Rover’s location on a map and in a 3D experience that places them on Mars, seeing the planet from the view of Perseverance. Viewers will also be able to see raw images from several of the rover’s cameras, which will be made available to the public.
AWS first started working with the NASA JPL Mars 2020 Rover team more than five years ago in preparation for this mission. Before Perseverance left Earth, the public also participated in the “Name the Rover” essay contest and signed up to send their own names to Mars in the “Send Your Name to Mars” campaign. Both the contest and the campaign websites were powered by AWS. A middle school student from Virginia won the competition to name the Mars Rover, and almost 11 million names are stenciled on a chip that is flying onboard Perseverance. Already more than four million people have signed up to send their names aboard the next mission, taking off in 2026.
Participate and learn more
Explore as Perseverance begins sending back data. Vote for your favorite image of the week, receive daily weather reports, and more. Check out more about AWS’s work with NASA JPL and the Mars Rover. Read more about how the AWS Aerospace and Satellites team is enabling successful space missions.
As part of the Formula 1 70th anniversary year, the pinnacle of motorsport has been working with Amazon Web Services (AWS) to compare driver speeds throughout the ages and define an ultimate ranking of the fastest drivers ever. Fastest Driver, the latest F1 Insight powered by AWS, is a unique tool that uses machine learning technology to provide an objective, data-driven ranking of all drivers from 1983 through present day, by removing the F1 car differential from the equation.
Ranked by qualifying speed – the fastest lap each driver sets over a Grand Prix weekend – three-time World Champion Ayrton Senna came out on top. The Brazilian was closely followed by seven-time World Champion Michael Schumacher, with a time differential of +0.114 seconds to Senna, while current World Champion Lewis Hamilton rounded out the top three with a relative time of +0.275 seconds.
By comparing teammates in qualifying sessions, the machine learning-based tool focuses on a driver’s performance output, building a network of teammates across the time-range, all interlinked, and therefore comparable. By comparing laptimes between teammates only, the Fastest Driver algorithm effectively normalises for car and the team performance. Overall, this builds up a picture of how drivers from different generations compare, by analysing the purest indication of raw speed – the qualifying lap.
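The teammate-linking idea can be sketched in a few lines of Python. This is a deliberately simplified illustration, not F1’s actual model: the real analysis handles noisy, multi-path comparisons (typically via regression), whereas this sketch simply chains average qualifying gaps through the teammate network from one reference driver. All names and delta values below are illustrative.

```python
from collections import deque

def rank_drivers(teammate_deltas, reference):
    """Propagate pairwise qualifying gaps through the teammate network
    so that every driver lands on a common time scale.

    teammate_deltas: list of (driver_a, driver_b, delta), where delta is
    driver_b's average qualifying gap to driver_a, in seconds.
    """
    # Build an undirected graph; traversing an edge backwards negates the gap.
    graph = {}
    for a, b, delta in teammate_deltas:
        graph.setdefault(a, []).append((b, delta))
        graph.setdefault(b, []).append((a, -delta))

    # Breadth-first traversal from the reference driver, summing deltas
    # along each path to place every reachable driver on one scale.
    relative = {reference: 0.0}
    queue = deque([reference])
    while queue:
        current = queue.popleft()
        for neighbour, delta in graph[current]:
            if neighbour not in relative:
                relative[neighbour] = relative[current] + delta
                queue.append(neighbour)

    # Normalise so the fastest driver sits at 0.000, then sort ascending.
    fastest = min(relative.values())
    return sorted(
        ((driver, round(gap - fastest, 3)) for driver, gap in relative.items()),
        key=lambda item: item[1],
    )

# Illustrative numbers only -- not F1's real deltas.
deltas = [
    ("Senna", "Prost", 0.2),      # Prost 0.2s slower than Senna
    ("Prost", "Hill", 0.3),
    ("Hill", "Schumacher", -0.4), # Schumacher 0.4s faster than Hill
]
print(rank_drivers(deltas, "Senna"))
```

Even though Senna and Schumacher never shared a car in this toy data, the shared teammates Prost and Hill link them, which is exactly how the cross-era comparison becomes possible.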
As part of F1 Insights, it also offers a unique window into a similar exercise F1 teams undertake to identify target drivers for upcoming seasons, here applied across a 37-year period of F1 history despite the differences in rules and machinery.
By using AWS’s machine learning technology, data scientists from F1 and the Amazon Machine Learning (ML) Solutions Lab have for the first time in history created a cross-era, objective, complex, data-driven ranking of driver speed – the Fastest Driver insight.
The output from the Fastest Driver insight is a dataset ranking drivers by qualifying speed, fastest first, with three fields per driver: Driver, Rank (integer), and Gap to best (seconds, to 0.001s).
The full top 10 driver rankings includes current F1 stars Max Verstappen, Charles Leclerc, and Sebastian Vettel, former World Champions Fernando Alonso and Nico Rosberg, and fan favourites Heikki Kovalainen and Jarno Trulli. Further drivers will be announced on F1.com in the coming weeks as the season continues and more data is analysed.
A detailed explainer video and analysis of the data can be found here.
Ranking  Driver              Timings (s)
1        Ayrton Senna        0.000
2        Michael Schumacher  0.114
3        Lewis Hamilton      0.275
4        Max Verstappen      0.280
5        Fernando Alonso     0.309
6        Nico Rosberg        0.374
7        Charles Leclerc     0.376
8        Heikki Kovalainen   0.378
9        Jarno Trulli        0.409
10       Sebastian Vettel    0.435
Dean Locke, Director of Broadcast & Media F1 said: “This has been such an exciting project to work on, stripping back the man from the machine and looking at a wealth of data of each driver throughout history. With the help of AWS we have been able to address something that has been asked for many years and rank drivers by the one raw attribute of pure speed in one flying lap, across the ages, regardless of how good their car is or isn’t.”
Rob Smedley, Director of Data Systems, F1 said: “Within the team environment this type of modelling is used to make key decisions on driver choices. As drivers are more often than not the most expensive asset of the team it is important that the selection process is as robust as possible. A process such as this therefore would be deployed by the F1 team’s strategists in order to present the most objective and evidence-based selection possible. Fastest Driver enables us to build up a picture of how the drivers compare, by analysing the purest indication of raw speed, the qualifying lap – and it’s important to note this pure speed is the only element of the vast driver armoury we are analysing here, to showcase the quickest drivers ever, which is very exciting.”
Dr. Priya Ponnapalli, Principal Scientist and Senior Manager, Amazon ML Solutions Lab, AWS, said: “We’re excited to be able to continue to collaborate with an organisation like F1, which has such a data-rich catalogue of information. With machine learning, there are a number of opportunities to apply the technology to answer complex problems, and in this case, we hope to help settle age-old disputes with fans by using data to inform decisions. For us at AWS, it’s exciting to see machine learning being used in a way that everyone can relate to.”
Today, Amazon Web Services, Inc. (AWS), an Amazon.com company, announced the general availability of Amazon CodeGuru, a developer tool powered by machine learning that provides intelligent recommendations for improving code quality and identifying an application’s most expensive lines of code. Amazon CodeGuru Reviewer helps improve code quality by scanning for critical issues, identifying bugs, and recommending how to remediate them.
Amazon CodeGuru Profiler helps developers find an application’s most expensive lines of code, along with specific visualizations and recommendations on how to improve code to save money. Amazon CodeGuru can be enabled with a few clicks in the AWS console; customers pay only for their actual use of Amazon CodeGuru, and it’s easy and affordable enough to run on every code review and application in an organization. To get started with Amazon CodeGuru, visit http://aws.amazon.com/codeguru.
Just like Amazon.com, AWS customers write a lot of code. Software development is a well-understood process: developers write code, review it, compile the code and deploy the application, measure the application’s performance, and use that data to improve the code. Then they rinse and repeat. Yet none of this matters if the code is incorrect to begin with, which is why teams perform code reviews to check the logic, syntax, and style before new code is added to an existing application code base.
Even for a large organization like Amazon, it’s challenging to find enough experienced developers with enough free time to do code reviews, given the amount of code that gets written every day. And even the most experienced reviewers can miss problems before they impact customer-facing applications, resulting in bugs and performance issues. Even after an application is up and running, developers still need to monitor its performance to make sure it is running efficiently.
Typically, developers monitor application performance through logging, which allows them to observe how much time an application takes to complete a task. However, logging is cumbersome to implement (requiring developers to instrument every function in the application), negatively impacts application performance, and doesn’t measure other metrics like CPU utilization that contribute to compute costs, leaving developers without a tool to effectively identify cost-saving opportunities for applications in production. Organizations often incur unnecessarily higher costs (sometimes upwards of tens of millions of dollars) for running applications that are in need of further optimizations because these applications consume more CPU and infrastructure than they should.
Amazon CodeGuru is a new developer service that uses machine learning to automate both code reviews during application development and profiling of applications in production. Amazon CodeGuru has two components:
Code Reviewer: Developers can use machine learning-powered Amazon CodeGuru Reviewer to automatically flag common issues that deviate from best practices (potentially leading to production issues), while also providing specific recommendations on how to fix them, including example code and links to relevant documentation. For code reviews, developers commit their code as usual to the repository of their choice (e.g. GitHub, GitHub Enterprise, Bitbucket Cloud, AWS CodeCommit) and add Amazon CodeGuru Reviewer as one of the code reviewers, with no other changes to the normal development process. Amazon CodeGuru Reviewer analyzes existing code bases in the repository, identifies hard-to-find bugs and critical issues with high accuracy, provides intelligent suggestions on how to remediate them, and creates a baseline for successive code reviews. To do so, Amazon CodeGuru Reviewer opens a pull request and automatically starts evaluating the code using machine learning models that have been trained on several decades of code reviews at Amazon.com and over ten thousand open-source projects on GitHub. If Amazon CodeGuru Reviewer discovers an issue (e.g. thread safety issues, use of un-sanitized inputs, inappropriate handling of sensitive data such as credentials, resource leaks, redundant copied-and-pasted code, deviation from best practices for using Java and AWS APIs, etc.), it will add a human-readable comment to the pull request that identifies the line of code, specific issue, and recommended remediation. Amazon CodeGuru Reviewer also provides a pull request dashboard that lists information for all code reviews (e.g. status of the code review, number of lines of code analyzed, and the number of recommendations). Users may also give feedback on CodeGuru Reviewer recommendations by clicking a thumbs up or thumbs down icon, which helps improve recommendations over time using machine learning.
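CodeGuru Reviewer’s findings come from learned models, but the flavor of one class of finding it describes above — a resource leak from an unclosed file handle — can be illustrated with a deliberately naive, rule-based check. This regex-over-lines sketch is nothing like the real analysis; it only shows the shape of a reviewer finding (line number plus remediation advice):

```python
import re

def review_resource_leaks(source):
    """Toy static check: flag `open(...)` calls that are not wrapped in a
    `with` statement. A real reviewer reasons over the AST and data flow,
    not a regex over individual lines."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        stripped = line.strip()
        if re.search(r"\bopen\(", stripped) and not stripped.startswith("with "):
            findings.append(
                (lineno,
                 "possible resource leak: consider `with open(...) as f:` "
                 "so the file handle is always released")
            )
    return findings

# Hypothetical snippet under review.
snippet = """\
f = open("data.txt")
data = f.read()
with open("log.txt") as log:
    log.write(data)
"""
for lineno, message in review_resource_leaks(snippet):
    print(f"line {lineno}: {message}")
```

Here only line 1 is flagged; the `with open(...)` on line 3 already releases its handle.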
Application Profiler: Developers can use machine learning-powered Amazon CodeGuru Profiler to identify the most expensive lines of code (in terms of potential estimated dollar savings) by helping them understand the runtime behavior of their applications (including serverless applications running via AWS Lambda or AWS Fargate), identify and remove code inefficiencies, improve performance, and significantly decrease compute costs. For example, Amazon’s internal teams have used Amazon CodeGuru Profiler on more than 30,000 production applications, resulting in tens of millions of dollars in savings on compute and infrastructure costs. Further, the Amazon.com Consumer Payments team used Amazon CodeGuru Profiler from 2017 to 2018 to gain efficiencies for the biggest shopping day of the year – Prime Day – and realized a 325% efficiency increase in CPU utilization across their applications and lowered costs by 39%. To get started with Amazon CodeGuru Profiler, customers install a small, low-profile agent in their application that can observe the application run time and profile the application to detect code quality issues (e.g. recreation of expensive objects, use of inefficient libraries, evaluating null or undefined values, etc.) along with details on latency and CPU usage. Amazon CodeGuru Profiler then uses machine learning to automatically identify code methods (reusable blocks of code also called functions) and anomalous behaviors that are most impacting latency and CPU usage. This information is brought together in a profile that clearly shows the areas of code that are most inefficient and provides visualizations that identify the code methods that are creating bottlenecks, along with a time-series graph of detected anomalies. The profile includes recommendations on how developers can fix issues to improve performance and also estimates the cost (in dollars) of continuing to run inefficient code so developers can prioritize remediation. 
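The core idea behind CodeGuru Profiler — attributing runtime cost to specific methods so the most expensive code stands out — can be illustrated with Python’s standard-library profiler. The functions below are invented for the example; CodeGuru’s agent additionally samples continuously in production with low overhead and converts cost into dollar estimates, which `cProfile` does not do:

```python
import cProfile
import pstats

def cheap_step():
    return sum(range(100))

def expensive_step():
    # Deliberately wasteful: the kind of hotspot a profiler surfaces.
    return sum(i * i for i in range(200_000))

def handle_request():
    cheap_step()
    expensive_step()

profiler = cProfile.Profile()
profiler.enable()
for _ in range(50):
    handle_request()
profiler.disable()

# Attribute cumulative runtime to each function -- the profiler's
# equivalent of "finding the most expensive lines of code".
stats = pstats.Stats(profiler)
costs = {
    func: cumulative
    for (_file, _line, func), (_cc, _nc, _tt, cumulative, _callers)
    in stats.stats.items()
    if func in ("cheap_step", "expensive_step")
}
print(max(costs, key=costs.get))  # expensive_step dominates the profile
```

Once the hotspot is identified, the fix (and its payoff) can be verified by profiling again, which mirrors the remediate-and-re-measure loop described above.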
Developers can now take advantage of the same technology deployed at Amazon to improve application performance and customer experiences, while also eliminating their most expensive lines of code.
“Our customers develop and run a lot of applications that include millions and millions of lines of code. Ensuring the quality and efficiency of that code is incredibly important, as bugs and inefficiencies in even a few lines of code can be very costly. Today, the methods for identifying code quality issues are time-consuming, manual, and error-prone, especially at scale,” said Swami Sivasubramanian, Vice President, Amazon Machine Learning, Amazon Web Services, Inc. “CodeGuru combines Amazon’s decades of experience developing and deploying applications at scale with considerable machine learning expertise to give customers a service that improves software quality, delights their customers with better application performance, and eliminates their most expensive lines of code.”
Amazon CodeGuru is available today in US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), EU (London), EU (Frankfurt), EU (Stockholm), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo) with availability in additional regions in the coming months.
Teams at more than 170,000 companies rely on Atlassian products to make teamwork easier, and help them organize, discuss, and complete their work. “At Atlassian, many of our services have hundreds of check-ins per deployment. While code reviews from our development team do a great job of preventing bugs from reaching production, it’s not always possible to predict how systems will behave under stress or manage complex data shapes, especially as we have multiple deployments per day,” said Zak Islam, Head of Engineering, Tech Teams, Atlassian. “When we detect anomalies in production, we have been able to reduce the investigation time from days to hours and sometimes minutes thanks to Amazon CodeGuru’s continuous profiling feature. Our developers now focus more of their energy on delivering differentiated capabilities and less time investigating problems in our production environment.”
EagleDream Technologies is a trusted cloud-native transformation company and APN Premier Consulting Partner for businesses using AWS. “Part of application development is the creation of performant systems as well as the feedback and continuous improvement of existing systems. This starts with a strong architectural foundation but often ends in the details of the application code. When our team at EagleDream is digging into these details there are a variety of tools at our disposal, and using both static and dynamic analysis is helpful,” said Dustin Potter, Principal Cloud Solutions Architect at EagleDream Technologies. “We’ve found that the runtime analysis offered by the CodeGuru Profiler is one of the simplest and fastest to get running, and generates insights into the application code that are easy to remediate. Using this tool we’ve been able to quickly hone in on portions of an application that represent bottlenecks that would have otherwise been difficult to spot, then develop changes that can be implemented and tested with a very fast feedback loop. This allows us to continuously deliver and improve our own workloads and the workloads of our customers, making them more performant while saving on cost at the same time.”
DevFactory manages over 600 million lines of code across over a hundred enterprise software products. “A key component of our future roadmap is to turn all our products into cloud-native products that leverage the incredible array of managed services available at AWS. Rebuilding old school, on-prem architectures, and transforming them for the cloud brings a whole set of engineering challenges that range from keeping abreast with all the latest services to adjusting to the paradigm shift that is associated with these architectures,” said Rahul Subramaniam, CEO at DevFactory. “CodeGuru is an incredibly valuable tool that helps optimize our products’ performance while making sure that we are leveraging these services with all the best practices in place. Without tools like CodeGuru Reviewer, we wouldn’t have been able to rewrite entire products like FogBugz to be AWS cloud-native. We are now using CodeGuru Profiler to optimize a number of products including EngineYard’s container-based ‘No Ops’ platform as well as the next generation of the Jive collaboration platform.”
RENGA, Inc. is a company that operates one of Japan’s largest condominium reviews and evaluation sites used by more than 1 million people every month. “Poor quality code adds complexity to the system and can become technical debt at some point. On the other hand, as long as consistent code quality is maintained, scaling the system won’t prevent developers from extending features as the code itself is simple,” said Kazuma Ohara, CTO at RENGA. “At RENGA, the code review process is important, however, it should not increase workload for reviewers or become a bottleneck in development. Powered by machine learning, Amazon CodeGuru Reviewer helped us automate code reviews and reduced the workload required on reviewers. We could seamlessly integrate Amazon CodeGuru Reviewer into our existing development pipeline. Furthermore, learning the best practices of coding – which we were not aware of – has helped us develop with more confidence.”
YouCanBook.me is a small, independent, and fully remote team, that loves solving scheduling problems all over the world. “Our use of Amazon CodeGuru Profiler is very simple but extremely valuable,” said Sergio Delgado, Engineering Team Lead at YouCanBook.me. “We’ve optimized our worst performant service to reduce its latency by 15% for the 95th percentile in a typical work day.”
Today, Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), and Formula 1 (F1) (NASDAQ: FWONA, FWONK) are introducing six new, real-time racing statistics that will roll out through the 2020 season, beginning with the launch of “Car Performance Scores” at the season-opening Grand Prix in Spielberg, Austria, July 3-5. “Car Performance Scores” isolates an individual car’s performance and allows race fans to compare cars head-to-head.
The new set of statistics to be released this season will use a range of AWS services, including machine learning, to give fans the ability to compare their favourite drivers and cars and better predict race outcomes. Learn more at: https://aws.amazon.com/f1/.
With 300 sensors on each F1 race car generating more than 1.1M data points per second transmitted from the cars to the pit, Formula 1 is a truly data-driven sport where much of the thrill comes from extracting exciting details on performance statistics. F1 relies on the breadth and depth of AWS services to stream, process, and analyze that flood of data in real-time, and then present it in a meaningful way for F1 global TV viewers.
The new “Car Performance Scores” insight will display as an on-screen graphic that provides fans with a complete breakdown of a car’s total performance using four core metrics: Low-Speed Cornering, High-Speed Cornering, Straight Line, and Car Handling. The new graphic will illustrate how those metrics compare from one car to another, enabling race fans to gauge a given car’s relative performance in those different areas and see where each team and driver is leading the pack or losing crucial time to their rivals.
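F1 has not published the scoring formula, but a plausible sketch of “relative performance in those different areas” is a per-metric normalisation across the field, so each car’s score shows where it gains or loses against rivals. The metric names follow the four categories above; the raw values and the 0–10 scale are assumptions for illustration:

```python
def performance_scores(cars):
    """Min-max normalise each metric across the field so the best car
    scores 10.0 and the worst 0.0 on that metric. This scaling is an
    assumption -- F1's actual scoring model is not public.

    cars: dict of car name -> dict of metric -> raw value (higher = better).
    """
    metrics = next(iter(cars.values())).keys()
    scores = {car: {} for car in cars}
    for metric in metrics:
        values = [cars[car][metric] for car in cars]
        low, high = min(values), max(values)
        span = high - low
        for car in cars:
            # If every car is identical on a metric, give all full marks.
            scores[car][metric] = (
                10.0 * (cars[car][metric] - low) / span if span else 10.0
            )
    return scores

# Illustrative raw metric values, not real telemetry.
field = {
    "Car A": {"low_speed_cornering": 161, "high_speed_cornering": 240,
              "straight_line": 338, "car_handling": 7.9},
    "Car B": {"low_speed_cornering": 155, "high_speed_cornering": 236,
              "straight_line": 345, "car_handling": 7.1},
}
scores = performance_scores(field)
print(scores)
```

In this made-up field, Car A leads on cornering and handling while Car B leads on straight-line speed, which is exactly the trade-off the on-screen graphic is meant to surface.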
F1 and AWS previously announced six F1 Insights – Exit Speed, Predicted Pit Stop Strategy, Pit Window, Battle Forecast, Pit Strategy Battle, and Tyre Performance – and will roll out the following six additional “F1 Insights powered by AWS” stats as on-screen graphics from July through December of this season, offering fans more visibility into the split-second decision-making and action on the track, as well as behind the pit wall where the team strategists operate:
Car Performance Scores: Isolates an individual car’s performance and allows race fans to compare its performance to that of different vehicles head-to-head (debuts July 3-5 at the FORMULA 1 ROLEX GROSSER PREIS VON ÖSTERREICH GRAND PRIX 2020).
Ultimate Driver Speed Comparison: Allows race fans to see how their favorite drivers compare to other drivers in history, dating back to 1983, to help determine the fastest driver of all time (debuts August 7-9 at the EMIRATES FORMULA 1 70th ANNIVERSARY GRAND PRIX 2020).
High-Speed/Low-Speed Corner Performance: Allows fans to see how well drivers tackle the fastest bends on the track travelling at more than 175 kph/109 mph and slow cornering (below 125 kph/78 mph) compared to other vehicles, which is critical to lap time (debuts August 28-30 at the FORMULA 1 ROLEX BELGIAN GRAND PRIX).
Driver Skills Rating: Breaks down and scores driver skills, based on the most important factors for overall performance, to help identify the best “total driver” on the track. By calculating varying subsets of qualifying round performance, starts, race pace, tire management, and overtaking/defending styles, this insight will provide an overall driver ranking (debuts the second half of the season).
Car/Team Development & Overall Season Performance: As the season unfolds, this will plot a team’s cumulative performance from race to race to uncover the development rates of each team (debuts the second half of the season).
Qualifying and Race Pace Predictions: Gather data from practice and qualifying laps to predict which team is poised for success ahead of each race session. These predictions will create heightened intrigue and excitement for the Saturday qualifying session and Sunday race (debuts the second half of the season).
To create these new insights, Formula 1 will use 70 years of historical race data stored in Amazon Simple Storage Service (Amazon S3), combined with live data that is streamed from sensors on F1 race cars and the trackside to the cloud through Amazon Kinesis, a service for real-time data collection, processing, and analysis. F1 engineers and scientists will use this data to leverage machine learning (ML) models with Amazon SageMaker, AWS’s service for building, training, and deploying ML models.
F1 is able to analyze race performance metrics in real-time by deploying those ML models on AWS Lambda, which runs code without the need to provision or manage servers. All of the insights will be integrated into the international broadcast feed of F1 races around the globe, including its digital platform F1.tv, helping fans to understand the split-second decisions and race strategies made by drivers or team strategists that can dramatically affect a race outcome.
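As a rough sketch of the deployment shape described above — stateless scoring logic running behind AWS Lambda — here is a hypothetical handler that ranks an incoming batch of laps. The event payload format and field names are invented for illustration; F1’s actual models and payloads are not public:

```python
import json

def lambda_handler(event, context):
    """Hypothetical AWS Lambda handler: sum streamed sector times into
    lap times and return the cars ranked fastest first. The `event`
    shape here is assumed, not F1's real telemetry schema."""
    results = []
    for lap in event["laps"]:
        results.append({
            "car": lap["car"],
            "lap_time": round(sum(lap["sector_times"]), 3),
        })
    results.sort(key=lambda r: r["lap_time"])
    # Lambda proxy-style response: status code plus a JSON body.
    return {"statusCode": 200, "body": json.dumps(results)}

# Local invocation with a made-up event; on AWS, Lambda supplies these.
event = {"laps": [
    {"car": "44", "sector_times": [28.4, 31.2, 23.9]},
    {"car": "33", "sector_times": [28.1, 31.5, 23.7]},
]}
print(lambda_handler(event, None))
```

Because the handler holds no state between invocations, it scales out automatically as telemetry volume spikes during a race, which is the point of running such code on Lambda.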
“Over the past two years, Formula 1 has embraced AWS’s services to perform intense and dynamic data analysis. The F1 Insights we’re delivering together are bringing fans closer to the track than ever before, and unlocking previously untold stories and insights from behind the pit wall,” said Rob Smedley, Chief Engineer of Formula 1. “We’re excited to be expanding this successful relationship to bring even more insights to life, allowing fans to go deeper into the many ways that drivers and racing teams work together to affect success.”
“Formula 1 racing mixes physics and human performance, yielding powerful, but complex data that AWS is helping them to harness. Our existing relationship with F1 has already produced statistics that have brought fans into the race paddocks, and our study of race car aerodynamics is influencing vehicle designs for the 2022 season,” said Mike Clayville, Vice President, Worldwide Commercial Sales at AWS. “This year, we’re thrilled to extend the power of F1 data in the cloud and unlock new insights that help fans understand more of F1’s rich complexity.”
To learn more about the first 2020 F1 Insight, Car Performance Scores, please click here. For more information about AWS and its involvement with Formula 1, please visit: https://aws.amazon.com/f1/. For additional news on how AWS is helping Formula 1 develop the next-generation race car, visit: https://aws.amazon.com/f1/news/.
AWS Ground Station is a fully managed service that provides global access to your space workloads. AWS Ground Station enables you to downlink data and send satellite commands across multiple regions quickly, easily, and cost-effectively, without having to build or manage your own ground station infrastructure. AWS Ground Station is available today in six AWS Regions around the world. To see a list of supported regions, please visit the Global Infrastructure Region Table webpage.
The recency of data is particularly critical when it comes to tracking and acting upon fast-moving conditions on Earth. This timeliness depends on frequent communications between ground stations and satellites, which can only be achieved with a large, global footprint of antennas maintaining frequent contact with orbiting satellites. The AWS Ground Station deployment in Ireland provides a second region in Europe to communicate with your satellite. Stockholm is the other AWS Region in the EU that offers AWS Ground Station.
Customers can easily integrate their space workloads with other AWS services in real time using Amazon’s low-latency, high-bandwidth global network. Customers can stream their satellite data to Amazon EC2 for real-time processing, store data in Amazon S3 for low-cost archiving, flow data through Amazon Rekognition for image analysis, or apply AI/ML algorithms to satellite images with Amazon SageMaker.
To learn more about AWS Ground Station, visit here. To get started with AWS Ground Station, visit the AWS Management console here.
Today, Amazon Web Services, Inc. (AWS) announced the general availability of Amazon Detective, a new security service that makes it easy for customers to conduct faster and more efficient investigations into security issues across their AWS workloads. Amazon Detective automatically collects log data from a customer’s resources and uses machine learning, statistical analysis, and graph theory to build interactive visualizations that help customers analyze, investigate, and quickly identify the root cause of potential security issues or suspicious activities. There are no additional charges or upfront commitments required to use Amazon Detective, and customers pay only for data ingested from AWS CloudTrail, Amazon Virtual Private Cloud (VPC) Flow Logs, and Amazon GuardDuty findings. To get started with Amazon Detective, visit https://aws.amazon.com/detective/.
When customers face a security issue like compromised user credentials or unauthorized access to a resource, security teams must conduct an investigation to understand the cause, assess the impact, and determine the remediation steps. Before an investigation can even begin, customers must first collect and combine terabytes of potentially relevant data from network, application, and security monitoring systems, and make it available in a way that allows their security analysts to infer related anomalies. In order to explore the data, analysts rely on data scientists and engineers to turn seemingly simple questions like “is this normal?” into mathematical models and queries that can help produce answers. Customers then typically build custom dashboards that analysts use to validate, compare, and correlate the data to reach their conclusions. Security teams must continually re-establish baselines of normal behavior, understand new patterns of activity, and revisit application configurations as resources, accounts, and applications are added or updated in an environment. These complex and time-consuming tasks impede security teams’ ability to quickly investigate and respond to security issues.
Amazon Detective helps security teams conduct faster and more effective investigations. Once enabled with a few clicks in the AWS Management Console, Amazon Detective automatically begins distilling and organizing data from AWS CloudTrail, Amazon VPC Flow Logs, and Amazon GuardDuty findings into a graph model that summarizes resource behaviors and interactions observed across a customer’s AWS environment. Using machine learning, statistical analysis, and graph theory, Amazon Detective produces tailored visualizations to help customers answer questions like “is this an unusual API call?” or “is this spike in traffic from this instance expected?” without having to organize any data or develop, configure, or tune their own queries and algorithms.
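A drastically simplified sketch of the two steps described above — summarising raw log events into a behavior graph, then flagging activity that departs from a baseline — might look like this. The principals, API names, and the fixed spike threshold are all illustrative; Detective’s real analysis applies machine learning and statistics over far richer telemetry:

```python
from collections import Counter, defaultdict

def build_behavior_graph(events):
    """Aggregate raw log events into a who-called-what summary, a toy
    stand-in for the graph model Detective builds from CloudTrail.

    events: iterable of (principal, api_call) pairs."""
    graph = defaultdict(Counter)
    for principal, api_call in events:
        graph[principal][api_call] += 1
    return graph

def unusual_calls(baseline, recent, factor=5):
    """Flag API calls whose recent volume far exceeds the baseline --
    a crude, fixed-threshold stand-in for statistical anomaly detection."""
    flags = []
    for principal, calls in recent.items():
        for api_call, count in calls.items():
            expected = baseline.get(principal, Counter())[api_call]
            if count > factor * max(expected, 1):
                flags.append((principal, api_call, count))
    return flags

# Made-up telemetry: a role that normally reads from S3 suddenly
# starts creating IAM users.
baseline = build_behavior_graph(
    [("app-role", "s3:GetObject")] * 40 + [("app-role", "s3:PutObject")] * 5
)
recent = build_behavior_graph(
    [("app-role", "s3:GetObject")] * 45 + [("app-role", "iam:CreateUser")] * 12
)
print(unusual_calls(baseline, recent))
```

The routine `s3:GetObject` traffic stays within its baseline and is ignored, while the never-before-seen `iam:CreateUser` burst is surfaced — the kind of “is this normal?” question the service answers without analysts writing queries.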
Amazon Detective’s visualizations provide the details, context, and guidance to help analysts quickly determine the nature and extent of issues identified by AWS security services like Amazon GuardDuty and AWS Security Hub. Amazon Detective’s graph model and analytics are continuously updated as new telemetry becomes available from a customer’s AWS resources, allowing security teams to spend less time tending to constantly changing data sources. By letting the Amazon Detective service perform the necessary data sifting, security teams can more quickly move on to remediation.
“Even when customers tell us their security teams have the tools and information to confidently detect and remediate issues, they often say they need help when it comes to understanding what caused the issues in the first place,” said Dan Plastina, Vice President for Security Services at AWS. “Gathering the information necessary to conduct effective security investigations has traditionally been a burdensome process, which can put crucial in-depth analysis out of reach for smaller organizations and strain resources for larger teams. Amazon Detective takes all of that extra work off of the customer’s plate, allowing them to focus on finding the root cause of an issue and ensuring it doesn’t happen again.”
Amazon Detective is available today in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and South America (Sao Paulo) regions, with more regions coming soon.
T-Systems, a subsidiary of Deutsche Telekom, is one of the world’s leading digital service providers. “As part of protecting our clients’ cloud applications and services, T-Systems’ security experts analyze billions of security-relevant events every day,” said Andrej Maya, Cloud Solutions Architect for T-Systems. “This has traditionally required using custom log management solutions that take considerable time and resources to maintain. Amazon Detective simplifies our security monitoring and helps our security analysts quickly understand potential issues without the complexity of managing the underlying data ourselves.”
WarnerMedia is a leading media and entertainment company that creates and distributes premium and popular content to global audiences. “Large security organizations are tasked with protecting huge environments with diverse workloads from a multitude of threats, while the smaller organizations I talk to often don’t have the resources to replicate the tooling and expertise of their bigger counterparts,” said Chris Farris who leads public cloud security for WarnerMedia and teaches Cloud Security for the SANS Institute. “Amazon Detective will help both of these groups reach faster, better-informed conclusions to their security investigations. It does the hard work of aggregating and analyzing high-volume telemetry sources like VPC Flow logs and CloudTrail. Larger organizations will see major efficiencies, and small teams will have access to information and tooling that they’d have a hard time collecting and building on their own.”
Expel provides transparent managed security, on-prem and in the cloud. “We have customers of all shapes and sizes running a diverse array of workloads on AWS, so it’s critical that we have high-quality data sources that can aid us in conducting fast and accurate security investigations,” said Peter Silberman, chief technology officer at Expel. “Amazon Detective offers our customers an additional layer of insight about what’s happening in their environment, which gives our security analysts more data and context to use during investigations without adding complexity to that process. With Amazon Detective, we’ll be able to process specific types of alerts faster, which means reducing investigation time and getting quicker, more detailed answers to our customers about what happened.”