How to choose the best cloud platform for AI
Explore a strategy that shows you how to choose a cloud platform for your AI goals. Use Avenga’s Cloud Companion to speed up your decision-making.
We live in a dynamic world, full of disruptions and unexpected events, in which businesses face ever-growing pressure to accelerate digitally.
Technology lifecycles are shorter than ever, and support windows (if they exist at all) are shrinking as well.
Open source components are the default choice for building digital solutions.
Security is a game-changer: a constant flow of vulnerability reports and patches must be absorbed to keep digital products secure and customers’ data protected.
Ready-to-use APIs are increasingly replacing internal components and custom code.
Cloud services enable new runtime environments and entire DevOps and DataOps pipelines, providing thousands of APIs that help build digital products faster.
There is also a new range of cloud-based Integrated Development Environments.
The idea of selecting a commodity over the ‘best’ is about choosing the most popular technologies, solutions and patterns in a given area as the most effective IT strategy for building digital solutions.
There are always fierce discussions about which technology is better than another. Some camps are entrenched in their positions, defending their opinions about which technology is best.
There’s no universally accepted definition of what “best” means, nor of how technologies should be measured to decide which one really is better. For some, the maturity of a technology matters more than its feature set; some will dislike a technology simply because of its origin (Java devs vs. C#, for instance); some prefer a more functional design (Scala); others treat a new paradigm as an additional entry barrier.
So the simplest approach is to pick what is popular and widely accepted: a proven track record, a significant market share, and a skill set that is easy to obtain.
Maybe there’s something better with 1% of the market share, but it hardly matters.
Even though “best” is hard to define, there’s often a consensus that the most popular technology in a given area became popular because enough people considered it much better than its predecessors. When enough developers and architects believe in it, we can observe a massive move to embrace the new technology.
For instance, microservices (more or less) became the first choice for solution architecture relatively quickly. Is it the best paradigm? There are lots of arguments that it is not, but it is a well-known choice with well-understood consequences, pros and cons.
Picking different technologies per team used to be strongly encouraged by the agile movement of the past; let’s call it Agile 1.0, the naive edition. The liberal approach said that each team should choose whatever it thought would work best to solve the problem. This made very strong assumptions about the real priorities of the team (functional velocity vs. an itch to play with something new). Even worse, it did not take into account the fragmentation of the technology landscape across the entire solution or organization.
Of course, different technologies may be relatively easily mixed in microservices or similar architectures, and they will work together.
But there will be consequences at the solution or organization level. First of all, it limits flexibility: one module of the system may need more people assigned to it, but it will be much harder to simply shift people from another module because of differences in the technology stack. Secondly, it increases exposure to attrition: with a fragmented stack, losing just a few people can mean losing half of the capacity for a given skill set.
A good rule of thumb is to use the latest versions of the mainstream technologies, define them upfront, and be relentless in architectural governance to make sure there are no unexpected violations. This satisfies the urge to work with the latest technologies without risking the stability of the projects.
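For teams on the JVM, that kind of governance can even be automated. Below is a minimal sketch using the ArchUnit library; the package names and the forbidden dependencies are hypothetical, and a real rule set would reflect the stack your architecture board has actually approved.

```java
import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import com.tngtech.archunit.lang.ArchRule;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

/** A minimal architectural-governance check; package names are hypothetical. */
public class GovernanceCheck {

    public static void main(String[] args) {
        // Scan the compiled classes of the (hypothetical) application package.
        JavaClasses classes = new ClassFileImporter().importPackages("com.example.app");

        // Enforce one approved HTTP client: code outside the integration layer
        // must not pull in alternative HTTP client libraries directly.
        ArchRule oneHttpClient = noClasses()
                .that().resideOutsideOfPackage("..integration..")
                .should().dependOnClassesThat()
                .resideInAnyPackage("org.apache.http..", "okhttp3..");

        // check() throws an AssertionError on violation, so wiring this into
        // the CI pipeline makes stack violations fail the build automatically.
        oneHttpClient.check(classes);
        System.out.println("No unexpected technology-stack violations found.");
    }
}
```

In practice such rules would more likely live in the test suite than in a main method, so that every commit is checked against the agreed stack.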
Working with the best but rarest technologies means it will be much harder to scale the team and to hire new people with the needed skills. It will also take time and effort to educate existing employees to embrace another technology stack. Of course, some people will enjoy that, while others will react very negatively to abandoning a favorite technology set in which they have invested a lot of time and effort.
That’s one of the key reasons why Java and C# are still the main technologies used to build enterprise applications. Despite being old, they are still under active development and play a significant role in their areas.
The leaders in a given area usually integrate well with the leaders from other areas. These connections further reinforce the benefits of commodity over the best. Many tools simply assume that one uses Kubernetes and Linux, for instance, and given their market dominance it’s hard to demand support for anything else. Another example is Swagger (now the OpenAPI Specification): some tools explicitly expect API definitions to be written in Swagger. Is it the best tool for API management? It… doesn’t matter.
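To make the point concrete, here is a sketch of how this plays out in a Java service. Assuming a Spring Boot project with the springdoc-openapi dependency (the controller and the Order record below are hypothetical), a single annotation is enough for the whole Swagger/OpenAPI toolchain to pick the endpoint up:

```java
import io.swagger.v3.oas.annotations.Operation;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class OrderController {

    // Hypothetical payload; in a real service this would be a domain object.
    record Order(long id, String status) {}

    // springdoc-openapi scans this annotation and publishes the endpoint in
    // the generated OpenAPI (Swagger) document (by default at /v3/api-docs),
    // where API gateways, code generators and test tools can consume it.
    @Operation(summary = "Fetch a single order by its id")
    @GetMapping("/orders/{id}")
    public Order getOrder(@PathVariable long id) {
        return new Order(id, "SHIPPED"); // stub response for the sketch
    }
}
```

No extra integration work is needed precisely because everyone has standardized on the same format.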
In summary, the mutual connection between different popular technologies makes them more convenient to use and preferable even though they may not be the best.
The most popular container orchestrator, Kubernetes, is almost synonymous with the container environment in real-life scenarios.
There were other orchestrators before, such as Apache Mesos or Docker Swarm, and they are still there, but the power of popularity makes Kubernetes the default choice in more than 90% of the cases.
The more difficult decision is actually not to use Kubernetes and microservices at all, mostly because of popular demand from developers who consider the alternatives obsolete and not even worth their time.
Java is both legacy and modern at the same time, thanks to constant improvements. It is by far the most popular language for building backend applications, across a vast majority of use cases.
Microsoft’s alternative, C# with .NET, also remains popular despite decades on the market.
“There’s no week without a new front-end framework” might be a slight exaggeration, but it reflects a very active front-end framework development community.
But when it comes to serious, long(er)-term choices, only a few players remain. React might not be the best front-end framework, but in many areas it is the most popular, and therefore it is … the best choice.
There are so many different Linux distributions that it is hard even to track them all. Almost everyone has a preferred distribution, many of which are derived from yet another distribution, and so on.
That may work well for a personal Linux workstation, but in the reality of cloud and enterprise applications it is more common to select popular distributions with the best support; Red Hat Enterprise Linux and Ubuntu Server, for instance, are very popular runtime environments. In the case of containers, Alpine Linux is often preferred for its smaller footprint; another alternative is to use distroless images.
The data processing and data science universe is highly fragmented, but even there clear winners emerge. For example, in the ever-popular machine learning space, the main ecosystems are TensorFlow and PyTorch. There are many, many choices, but when real-world criteria are applied, the number immediately decreases to a handful of technologies.
In the case of the public cloud, the consensus says there are three cloud providers: Amazon, Microsoft and Google. There are other cloud providers, but they are far less likely to be chosen over the top three. Others think the choice is only between Amazon and Microsoft, or that a combination of both is the safest choice for the medium-term future. Most enterprise infrastructures are hybrid clouds and will remain so for a very long time.
Choosing common technologies and making use of them may create a sense of boredom and a growing Fear of Missing Out (FOMO), especially during long projects. But it doesn’t have to be as negative as it might seem.
It takes effort to explain why particular technologies and components were chosen, what the consequences of internal fragmentation would be, and what the real priorities of the project are. And those priorities are certainly not about embracing as many technologies as possible just to have fun.
The growing demand for both speed and resilience requires a technology stack that is both modern and stable.
Speaking of fun, it’s good practice to let developers play with new technologies in separate proof-of-concept environments and activities; it gives them some fresh creative air to breathe, and testing out other technologies can produce a worthy outcome for the organization.
This raises a very good question: how does this strategy of safe choices stay in line with the need for improvement and innovation? It seems so counterintuitive.
There are at least two main scenarios.
The first is when a new, even experimental, technology brings significant business value and opportunity. Then it is a critical part of an innovation or market-disruption attempt. Take early blockchain for business applications as an example: today these digital ledger technologies (DLT) are more or less mainstream (at least from the technology point of view), but in their early stages, DLT solutions were very much experimental and high risk.
There are always new technologies, new options, and new trends that create business opportunities.
It is a good idea to start such projects with a technological proof of concept to verify that the technology works, that it is stable enough, and what capabilities it can deliver. This is what Avenga Labs does for its clients. It is much more effective to establish which technological capabilities actually exist before diving into the full development of the first MVP. For instance, new opportunities such as advances in federated learning, data mesh, or API marketplaces may be worth exploring.
The other, and much more common, scenario is to innovate in the CX and UX areas, where standard and popular technologies are more than enough to implement brave and fresh visions. Commodity over the best is also a great help in achieving maximum development velocity, as it is much easier to scale up teams, fix bugs and find workable solutions when using known and popular technologies.
Changing the front-end framework or trying different service mesh technologies during MVP development is not going to help (at least not significantly) to deliver a new CX faster or to verify it with real users (using feature toggles, for instance). In most cases, the focus should be on business features, along with the resilience and security of the solution.
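As an illustration of the feature-toggle idea mentioned above, here is a deliberately tiny hand-rolled sketch in Java; the flag name and rollout rule are made up, and a real project would more likely reach for an established library such as Togglz or Unleash than write its own.

```java
import java.util.Set;

/** A minimal feature-toggle sketch; flag names are hypothetical. */
public class FeatureToggles {

    private final Set<String> enabledFlags;

    public FeatureToggles(Set<String> enabledFlags) {
        this.enabledFlags = enabledFlags;
    }

    public boolean isOn(String flag) {
        return enabledFlags.contains(flag);
    }

    public static void main(String[] args) {
        // In real life the enabled set would come from configuration or a
        // toggle service, so the new CX can be switched on for a subset of
        // real users without a redeployment.
        FeatureToggles toggles = new FeatureToggles(Set.of("new-checkout-flow"));

        if (toggles.isOn("new-checkout-flow")) {
            System.out.println("Rendering the redesigned checkout");
        } else {
            System.out.println("Rendering the existing checkout");
        }
    }
}
```

The point is that verifying a fresh CX with real users needs only this kind of boring, well-understood mechanism, not a novel technology stack.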
I like the IT world because many things in it are counterintuitive: there’s no such thing as “common sense” when it comes to IT strategy and tactics. It is experience and practical observation that help us make the right decisions in the right context and at the right time.
Choosing popular and reliable technologies delivers better results in most cases: more stability, easier team scaling, and, in case of trouble, you won’t be alone with your problem.
Projects focused on innovative technology should start with a proof of concept that validates the technology against the expected business purpose, or that tries to find the business value of a given technology.
It is a very fast and competitive digital world we live in today and the winners will be the ones who focus on the right things at the right time.