By Boris Cergol, Head of AI at Comtrade Digital Services
Over the past few years, digitalisation has been a buzzword for many companies as they try to take advantage of new technologies, including Artificial Intelligence (AI), cloud environments, and machine learning, to improve operations, centralise data, and enhance customer engagement.
However, in reality, this transition into the virtual or digital world isn’t without its challenges. One common problem for organisations is bridging the gap between the implementation of AI or data science and the delivery of real business value.
In fact, when the data scientist role emerged, it was a sort of “unicorn” role because these individuals were expected to not only manage the technology infrastructure and build AI models, but also translate data into actionable predictions and communicate effectively with more business-minded colleagues.
Mixed results in the beginning
Of course, some companies have had more success than others in applying AI. Typically, larger companies with economies of scale have done better, as have technology companies and those working in ecommerce or finance, where the use and value of AI is more straightforward.
Many small businesses that decided to embrace such technologies have probably been disappointed with what they got out of them, especially considering the amount of hype involved.
Therefore, the question became: how can AI have a bigger impact in the real world, spreading beyond the confines of the digital, to provide greater value to a wider circle of businesses and people?
Key enablers of AI
There are a number of enablers having an impact on the success rate of AI, the first of which is computing power. It has already enabled a Deep Learning revolution: since 2012, the compute used to train the largest AI models has increased roughly 300,000 times. Taking the ImageNet dataset as an example, the time needed to train a network to perform image classification fell from around 10 hours in 2017 to just 88 seconds in 2019!
As well as more compute being available at lower cost, the efficiency of algorithms is also increasing exponentially: reaching the same level of performance as a 2012 model now takes 44 times less compute with newer algorithms.
Data is another strong enabler of AI, and significant progress is being made on the problem of data labelling within Deep Learning. Because there is so much data and so little of it is labelled, supervised models have been unable to make use of most of it, creating a need for human data labellers. Self-supervised models are addressing this: given a large amount of data, they remove part of a data point and learn to predict what was removed based on the data that remains.
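To make the idea concrete, here is a minimal, hypothetical sketch of that self-supervised recipe: the “label” is simply a value hidden from the model’s own input, so no human labelling is needed. The sine-wave data, window length, and linear predictor are illustrative assumptions for this toy, not taken from any specific system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "unlabelled" data: noisy sine-wave windows of length 5.
starts = rng.uniform(0, 2 * np.pi, size=1000)
X_full = np.sin(starts[:, None] + np.linspace(0, 1, 5)[None, :])
X_full += rng.normal(scale=0.01, size=X_full.shape)

# Self-supervised objective: hide the middle value of each window
# and predict it from the remaining four values.
mask_idx = 2
X = np.delete(X_full, mask_idx, axis=1)   # visible context
y = X_full[:, mask_idx]                   # the "label" comes from the data itself

# Fit a linear predictor by least squares - no hand-made labels required.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
mse = np.mean((X @ w - y) ** 2)
print(f"mean squared error on masked values: {mse:.4f}")
```

The same pattern — mask part of the input, predict it from the rest — is what lets large models learn from raw, unlabelled text and images.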
Automated Machine Learning (AutoML) methods are also on the rise, helping to automate routine data science work and acting as a strong driver of real-world AI applications. In fact, in some circumstances, end users are already interacting directly with AutoML models. The delivery of ML on edge devices is another exciting area of development that supports AI implementation: for example, WeWalk’s smart canes for visually impaired people, designed to give users more independence.
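As an illustration of what AutoML-style automation does under the hood, the toy loop below randomly samples model configurations and keeps the one with the lowest validation error. The task, the search space (polynomial degree and ridge strength), and the budget of 30 trials are all invented for this sketch; production AutoML systems search far richer spaces of models and preprocessing steps.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression task: y = sin(x) + noise, split into train and validation.
x = rng.uniform(-3, 3, size=200)
y = np.sin(x) + rng.normal(scale=0.1, size=x.size)
x_train, x_val = x[:150], x[150:]
y_train, y_val = y[:150], y[150:]

def fit_predict(degree, ridge, x_tr, y_tr, x_te):
    # Polynomial ridge regression solved in closed form.
    Phi_tr = np.vander(x_tr, degree + 1)
    Phi_te = np.vander(x_te, degree + 1)
    w = np.linalg.solve(Phi_tr.T @ Phi_tr + ridge * np.eye(degree + 1),
                        Phi_tr.T @ y_tr)
    return Phi_te @ w

# A miniature "AutoML" loop: sample random configurations and
# keep whichever one scores best on held-out data.
best = None
for _ in range(30):
    cfg = {"degree": int(rng.integers(1, 10)),
           "ridge": 10.0 ** rng.uniform(-6, 0)}
    pred = fit_predict(cfg["degree"], cfg["ridge"], x_train, y_train, x_val)
    err = np.mean((pred - y_val) ** 2)
    if best is None or err < best[0]:
        best = (err, cfg)

print(f"best config: {best[1]}, validation MSE: {best[0]:.4f}")
```

The end user never chooses the degree or the regularisation strength — the search does, which is exactly the kind of decision AutoML takes off the data scientist’s plate.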
Ethical considerations will inevitably arise
Ethics can often be an obstacle to AI and ML implementation. What happens with all the data? If different users are accessing the same model, what is the impact on privacy? Does the payoff of using AI for health purposes justify the use of data? How can we combat AI being intentionally used for deception? And so on…
Take Generative Adversarial Networks (GANs) as an example. A generator network produces images modelled on the training set, while a second network, the discriminator, tries to determine whether a given image is real or generated. By competing against each other, the generator eventually learns to create images that look as though they came from the training set. While this has the potential to be used in retail and cosmetics to digitally show consumers what an outfit or makeup product might look like on them – a sort of virtual “try-on” experience – it could also be used in advertising to deceive the audience. Another instance is using digital avatars or deep fakes to build a fake persona on social networks.
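The adversarial game described above can be sketched in a few lines. The example below is a deliberately tiny, hypothetical GAN on one-dimensional numbers rather than images: the generator is a single affine map, the discriminator a logistic unit, and the gradients are worked out by hand. Real GANs use deep networks and a framework’s automatic differentiation, but the two alternating update steps are the same in spirit.

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# "Real" data: samples from a normal distribution centred at 4.0.
def real_batch(n=64):
    return rng.normal(4.0, 0.5, size=n)

# Generator g(z) = a*z + b and discriminator D(x) = sigmoid(w*x + c),
# deliberately tiny so the adversarial loop fits in a few lines.
a, b = 1.0, 0.0
w, c = 0.0, 0.0
lr = 0.05

for step in range(2000):
    z = rng.normal(size=64)
    x_real, x_fake = real_batch(), a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w -= lr * (np.mean((d_real - 1) * x_real) + np.mean(d_fake * x_fake))
    c -= lr * (np.mean(d_real - 1) + np.mean(d_fake))

    # Generator step (non-saturating loss): push D(fake) toward 1.
    d_fake = sigmoid(w * (a * z + b) + c)
    g_out = (d_fake - 1) * w          # gradient of the loss w.r.t. each fake sample
    a -= lr * np.mean(g_out * z)
    b -= lr * np.mean(g_out)

print(f"generated mean is near {b:.2f} (real mean is 4.0)")
```

After training, the generator’s output drifts toward the real distribution — the same mechanism that, scaled up to deep networks, produces photorealistic faces and deep fakes.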
For the most part though, AI is being used for the right reasons and is generating some impressive results. One area in particular is natural language processing, which has led to the next generation of chatbots that are much more receptive and customer-friendly than before.
So, what comes next?
In the future, there’s a strong possibility that each of us could have a digital companion: the ultimate example of supportive AI, holding a large amount of data on us but using it to watch over us. Early precursors already exist, such as shoes that monitor sports performance.
There are also signs that we aren’t far away from unsupervised reinforcement learning or imitation learning within robotics. In the not-too-distant future, robots could learn the way children do – by observing the behaviour of others, experimenting, and adapting how they act accordingly.
Undoubtedly, there’s a great deal of research and development needed in the area of the real-world application of AI, but progress is certainly being made and models are being put to good use in a range of industries, including healthcare.
As the well-known researcher Jürgen Schmidhuber recently said: “Although the real world is much more complex than virtual worlds, and less forgiving, the coming wave of real world AI or simple real AI will be much bigger than the previous AI wave, because it will affect all of production, and thus a much bigger part of the economy”.