Machine Learning Demystified

‘Machine learning’ simply means software learning something (and improving automatically) through exposure to information. If the software just collects and stores information rather than improving automatically, this wouldn’t be thought of as machine learning. The most popular technical solutions to achieve this automatic improvement are those using artificial neural networks, which are connected arrangements of virtual neurons inside a computer with similarities to the way physical neurons are networked in human and animal brains, but also with fundamental differences.

‘Deep learning’ is a subset of machine learning that uses multiple layers (hence the term ‘deep’), each modelling progressively higher-level relationships in the data. In image recognition, for instance, the first layer may turn the raw image data into a series of edges, the next may group those edges into shapes, and the next may determine what kind of object the image depicts.
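As a loose illustration only (not a real vision model), the layered structure can be thought of as composed functions, each turning its input into a higher-level description. The function names and the edge/shape/object stages here are made up for the sketch:

```python
# Toy sketch of the edge -> shape -> object idea; each 'layer' produces
# a higher-level description of its input. Purely illustrative.
def edges_layer(pixels):
    return ["edge"] * (len(pixels) // 2)    # raw pixels -> detected edges

def shapes_layer(edges):
    return ["shape"] * (len(edges) // 2)    # edges -> grouped shapes

def object_layer(shapes):
    return "cat" if shapes else "unknown"   # shapes -> object label

def deep_model(pixels):
    # A 'deep' model is the layers applied one after another.
    return object_layer(shapes_layer(edges_layer(pixels)))

print(deep_model([0.1] * 8))  # -> 'cat'
```

In a real deep network each layer is a large array of learned weights rather than hand-written logic, but the principle of stacked, progressively more abstract representations is the same.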

‘Artificial Intelligence’ refers to the overarching field in which software displays behaviour we would consider in some way ‘intelligent’; this includes the uses of machine learning described here.

Modern Generative AI


Current ‘Generative AI’ methods are all based on machine learning: the model has been trained to learn what kind of output is desirable for a given input. The models used are often very large and require a great deal of training to achieve the impressive results we have seen lately.
Image generators like Stable Diffusion and DALL-E have been trained on billions of images and accompanying text, in order to learn the correlation between descriptive text and a suitable image.
Generative chat systems like ChatGPT, Bard or Llama2-Chat have first been trained on terabytes of text, to learn the relationship between the start of a piece of text and the rest, then trained on human conversations, to predict the most likely response given the contents of the chat so far. It is important to understand that this is all they are doing: there is no human-like thought process, just a (very complex) statistical prediction of the most likely next entry in the chat based on what has gone before and what other information is supplied.
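To make the ‘statistical prediction of the next entry’ idea concrete, here is a deliberately tiny sketch of next-word prediction: a bigram counter that, for each word, records which word most often follows it. Real language models use vastly larger contexts and learned weights rather than raw counts, but the core task of predicting the most likely continuation is the same:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """For each word, count which words follow it in the training text."""
    words = text.split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # -> 'cat' ('the' is followed by 'cat' twice, 'mat' once)
```

The model has no understanding of cats or mats; it simply reproduces the most frequent pattern in its training data, which is the same limitation, at a much grander scale, as the chat systems described above.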

Benefits to business
Businesses in possession of large data sets can use machine learning to extract knowledge and actionable intelligence, which can be integrated into business processes and operational activities to respond to changing market demands or business circumstances. Conversely, businesses risk a competitive disadvantage if they either don't collect quality data to the extent their competitors do, or fail to exploit their data while their competitors make optimal use of theirs, which increasingly means utilising machine learning rather than just manual data analytics.

As far back as 2017 a survey of 375 leading businesses by MIT Technology Review found that 60% of respondents had already begun to implement a machine learning strategy.
Machine Learning Frameworks
Just as you wouldn’t rewrite low-level operating system components when developing a typical software application, there’s no need to develop the low-level functions for working with artificial neural networks when developing a machine learning solution. Excellent frameworks already exist for this and have been made open source, despite massive investment in them from the biggest names in the software industry. TensorFlow, developed by Google, is widely seen as the leading machine learning platform and is certainly the most popular; it’s open source under the permissive Apache Software Licence. Other strong contenders under a permissive open source licence are PyTorch, developed by Facebook, and Apache MXNet, supported by Intel and Amazon along with several top universities. These frameworks and others like them allow developers and data scientists to experiment with data and implement solutions without worrying about the low-level implementation of artificial neural networks, and to benefit from the optimisations that come from all the research effort that has gone into them.
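To give a feel for the low-level plumbing these frameworks spare you, here is a minimal sketch, in plain Python with no framework, of a single artificial neuron's forward pass. Frameworks like TensorFlow and PyTorch provide this plus automatic differentiation, GPU acceleration and a great deal more:

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, squashed by a sigmoid activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid maps any value into (0, 1)

def layer(inputs, weight_rows, biases):
    """A layer is just many neurons sharing the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Example weights and biases are arbitrary illustrative values.
out = layer([1.0, 0.5], [[0.2, -0.4], [0.7, 0.1]], [0.0, -0.3])
print(out)  # two activations, each between 0 and 1
```

A full network is layers of these feeding into each other, with the weights adjusted during training; writing that machinery efficiently and correctly by hand is exactly the work the frameworks eliminate.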

In practice, setting up an artificial neural network to learn automatically from a particular set of information does require a fair amount of experimentation and human brain power. Somewhat ironically, and despite the best efforts of researchers, automating this to a high standard using AI still seems to be some way off. The best ‘Auto ML’ systems, such as Amazon’s SageMaker Autopilot, can’t come close to competing with a skilled data scientist or ML engineer using a high-level framework, although for some projects they can help make that person more productive.
Machine Learning Process
The process of feeding information to an artificial neural network in order to improve its suitability for a particular use case is referred to as ‘training’, and once the network has been through training it’s referred to as a model, because it models the relationships in the data it was trained with.

Where suitable complete data has been selected specifically for training, with the intention of the neural network ‘learning’ to model a particular relationship within it (for instance how customers respond to adverts depending on their visual attributes), this is referred to as ‘supervised learning’: the outputs are known during the training process, which sometimes involves manually labelling or tagging the data records with the kind of output you’d like the model to produce automatically once training is complete. In ‘unsupervised learning’, by contrast, the machine learns patterns in data that doesn’t have known outputs, for instance optimally segmenting a customer base so that going forward the software can determine which segment a new customer best matches.

‘Reinforcement learning’ is where models are trained to make optimal decisions in a particular environment. It has applications ranging from video games to self-driving cars and is often what people picture when they think of Artificial Intelligence.

The concept of machine learning isn’t all that new: artificial neural networks were conceived in the 1940s, and machine learning was defined by IBM researcher Arthur Samuel in 1959.
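Supervised learning can be sketched in a few lines. This toy example trains a single perceptron (one artificial neuron with a step activation) on labelled examples of the logical AND function: because every training record carries its known output, the training loop can nudge the weights whenever the prediction is wrong, which is the essence of supervision. The learning rate and epoch count are arbitrary illustrative choices:

```python
def predict(weights, bias, inputs):
    """Step-activation perceptron: fire if the weighted sum clears zero."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def train(examples, epochs=20, lr=0.1):
    """Supervised learning: adjust weights towards each known, labelled output."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, label in examples:
            error = label - predict(weights, bias, inputs)
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Labelled training data: the known outputs make this *supervised* learning.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(data)
print([predict(weights, bias, x) for x, _ in data])  # -> [0, 0, 0, 1]
```

After training, the weights and bias *model* the AND relationship, mirroring the terminology above: the trained network is the model. Unsupervised learning would instead receive the inputs without the 0/1 labels and have to find structure in them on its own.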
What it can’t do
To start with, although reinforcement learning can impressively train machines to mimic certain natural animal behaviours we may consider intelligent, a fully autonomous robot displaying human-like general intelligence remains the preserve of science fiction, and will do so until new technologies are invented; that has been the case throughout history and remains the case today.

Importantly for business, machine learning isn’t going to make useful predictions without first learning from relevant data. If you’ve trained a model on property-market data from a period when prices were consistently rising, it will have no way of predicting a fall in prices.

Sometimes data is relevant but of poor quality: riddled with errors, inaccurate, or incomplete. While in theory machine learning could in some circumstances be trained to sort the wheat from the chaff, in practice human cognition is going to produce better results at cleaning the data, or at least at writing code to clean it, if good results are possible at all.
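A minimal sketch of the kind of hand-written cleaning code this has in mind. The records, the `age` and `email` field names, and the validity rules are all made up for illustration; real cleaning logic depends entirely on the data set in question:

```python
# Hypothetical customer records; field names and rules are illustrative only.
raw_records = [
    {"age": 34, "email": "ann@example.com"},
    {"age": -5, "email": "bob@example.com"},   # impossible value (error)
    {"age": 29, "email": ""},                  # incomplete record
    {"age": 41, "email": "cat@example.com"},
]

def is_clean(record):
    """Keep only records that are complete and within plausible ranges."""
    return 0 < record["age"] < 120 and "@" in record["email"]

clean_records = [r for r in raw_records if is_clean(r)]
print(len(clean_records))  # -> 2
```

Encoding domain knowledge like ‘ages are between 0 and 120’ is exactly the sort of judgement a human supplies far more reliably than a model trained on the dirty data itself.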

Problems that can be solved efficiently using conventional software won’t benefit from machine learning, and applying it to them will be far less efficient, although that’s not to say their data output won’t be relevant to solving different problems with machine learning. Many problems break down into smaller problems, some better suited to machine learning and some better suited to conventional software.
What it can do
Machine learning excels at solving some problems that can’t be solved, or can’t be solved efficiently, using conventional software, and that are too time-consuming or involve too much information to be solved by humans.