2017 saw many data leaks and breaches that stemmed from poorly configured Amazon Web Services (AWS) environments, or more specifically, from misconfigured AWS S3 buckets. These weren’t small leaks, either. As a result, Verizon, Dow Jones & Co and the WWE found themselves in the media for the wrong reasons.
And they’re not the only ones. A quick Google search shows that 2018 is running in a similar vein to last year, with many organisations failing to follow relatively simple steps when administering their public cloud environments.
AWS itself is relatively simple to get up and running, but like most platforms, dig a bit (or a lot) deeper and it gets far more complex. So why are large organisations making such basic errors?
I’ve yet to come across an organisation that complains of having too many resources or too many skilled people, and it’s widely acknowledged that there’s a skills shortage in the industry, so I don’t expect this to change any time soon. That is surely the most likely reason why simple configuration errors keep happening. Whether it’s Amazon AWS, Microsoft Azure or Google Cloud, each platform comes with many different options and settings that can make implementation and configuration a minefield. An employee with Azure experience can’t automatically be expected to succeed if asked to implement a solution in AWS: while the underlying concepts are much the same, the actual configuration is vastly different, and that gap is what leads to fundamental mistakes.
Let’s play devil’s advocate and assume that the company implementing a cloud service does have adequate resources and skills available. What about the day-to-day management of those services? How can you keep on top of changes being made to cloud configuration settings, or to content being added to the buckets (or containers)?
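As one illustration of what "keeping on top of it" can look like, a periodic audit script can flag any bucket whose ACL grants access to everyone. The sketch below uses boto3 (the `audit_buckets` helper and its use of credentials are assumptions for this example); the grant-checking logic itself is plain Python:

```python
# S3 ACL grantee URIs that expose a bucket to the world or to any AWS account
PUBLIC_GRANTEE_URIS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}


def public_grants(acl):
    """Return the permissions an S3 ACL grants to public groups."""
    return [
        grant["Permission"]
        for grant in acl.get("Grants", [])
        if grant.get("Grantee", {}).get("URI") in PUBLIC_GRANTEE_URIS
    ]


def audit_buckets():
    """Hypothetical audit: list buckets with public ACL grants.

    Requires boto3 and configured AWS credentials to actually run.
    """
    import boto3

    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=bucket["Name"])
        if public_grants(acl):
            flagged.append(bucket["Name"])
    return flagged


# The pure-logic check against a sample ACL response:
sample_acl = {
    "Grants": [
        {
            "Grantee": {
                "Type": "Group",
                "URI": "http://acs.amazonaws.com/groups/global/AllUsers",
            },
            "Permission": "READ",
        }
    ]
}
print(public_grants(sample_acl))  # -> ['READ']
```

Run on a schedule (for example from a cron job or a Lambda function), a check like this turns a one-off configuration review into ongoing monitoring, which is exactly the gap the day-to-day management question points at.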