It’s sometimes difficult to distinguish the reality of technology from the hype and marketing messages that bombard our inboxes daily. In just the last five years, we’ve probably heard too much about the metaverse, blockchain and virtual reality, for example. At present, we’re in the midst of a furore about the much-abused term ‘AI’, and time will tell whether this particular storm turns out to have been one in a teacup.
Artificial Intelligence News spoke exclusively to Jon McLoone, the Director of Technical Communication and Strategy at Wolfram Research, one of the most mature organisations in the computational intelligence and scientific innovation space, to help us put our present concepts of AI and their practical uses into a deeper context.
Jon has worked at Wolfram Research for 32 years in various roles and currently leads the European Technical Services team. A mathematician by training and a skilled practitioner in many aspects of data analysis, he began our interview by describing Wolfram’s work in elevator-pitch form.
“Our value proposition is that we know computation and Wolfram technology. We tailor our technology to the problem that an organisation has. That’s across a broad range of things. So, we don’t have a typical customer. What they have in common is they’re doing something innovative.”
“We’re doing problem-solving, the type of things that use computation and data science. We’re building out a unified platform for computation, and when we talk about computation, we mean the kinds of technical computing, like engineering calculations, data science and machine learning. It’s things like social network analysis, biosciences, actuarial science, and financial computations. Abstractly, these are all fundamentally mathematical things.”
“Our world is all those structured areas where we’ve spent 30 years building out different ontologies. We have a symbolic representation of the maths, but also things like graphs and networks, documents, videos, images, audio, time series, entities in the real world, like cities, rivers, and mountains. My team is doing the fun stuff of actually making it do something useful!”
“AI we just see as another kind of computation. There are different algorithms that have been developed over the years, some of them hundreds of years ago, some of them only tens of years ago. Gen AI just adds to this list.”
Claims made about AI in 2024 can sometimes be overoptimistic, so we need to be realistic about its capabilities and consider what it excels at and where it falls short.
“There’s still human intelligence, which remains the strategic element. You’re not going to say, in the next five years AI will run my company and make decisions. Generative AI is very fluent but is unreliable. Its job is to be plausible, not to be correct. And particularly when you get into the kinds of things Wolfram does, it’s terrible because it will tell you the kinds of things that your mathematical answer would look like.” (Artificial Intelligence News’ italics.)
The work of Wolfram Research in this context focuses on what Jon terms ‘symbolic AI’. To differentiate generative and symbolic AI, he gave us the analogy of modelling the trajectory of a thrown ball. A generative AI would learn how the ball travels by examining many thousands of such throws and then be able to produce a description of the trajectory. “That description would be plausible. That kind of model is data-rich, understanding poor.”
A symbolic representation of the thrown ball, on the other hand, would involve differential equations for projectile motion and representations of elements: mass, viscosity of the atmosphere, friction, and many other factors. “It could then be asked, ‘What happens if I throw the ball on Mars?’ It’ll say something accurate. It’s not going to fail.”
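To make the distinction concrete, here is a minimal sketch in Wolfram Language of what such a symbolic model might look like: projectile motion with a simplified linear drag term and invented initial conditions (not Wolfram’s actual model), where asking the Mars question is just a matter of changing the parameters.

(* Toy symbolic model: projectile motion with simplified linear drag.
   All parameters here are illustrative, not taken from the interview. *)
trajectory[g_, drag_] := NDSolveValue[{
    x''[t] == -drag x'[t],
    y''[t] == -g - drag y'[t],
    x[0] == 0, y[0] == 0, x'[0] == 10, y'[0] == 10},
   {x, y}, {t, 0, 10}];

earth = trajectory[9.81, 0.1];     (* Earth gravity, some atmospheric drag   *)
mars  = trajectory[3.71, 0.001];   (* weaker gravity, far thinner atmosphere *)

(* The same equations answer "what if I throw it on Mars?" -- no retraining. *)
ParametricPlot[Evaluate[{Through[earth[t]], Through[mars[t]]}], {t, 0, 3}]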
The ideal way to solve business (or scientific, medical, or engineering) problems is to combine human intelligence with symbolic reasoning, as epitomised in Wolfram Language, with what we now term AI acting as the glue between the two. AI is a great technology for interpreting meaning and acting as an interface between the component parts.
“Some of the interesting crossovers are where we take natural language and turn that into some structured information that you can then compute with. Human language is very messy and ambiguous, and generative AI is very good at mapping that to some structure. Once you’re in a structured world of something that is syntactically formal, then you can do things on it.”
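As a hedged illustration of that crossover, the sketch below assumes Wolfram Language 13.3 or later with an LLM service connection (e.g. an API key) configured for the built-in LLM functions; the prompt and example sentence are invented. Messy text is mapped to a formal entity, which can then be computed with.

(* Sketch only: requires an LLM service configured for the LLM functions. *)
whichCity = LLMFunction[
   "Name only the city referred to in this sentence: ``"];

raw  = whichCity["we met at the expo in the Dutch capital last autumn"];
city = Interpreter["City"][raw];   (* messy text -> a formal, computable entity *)

city["Population"]   (* once structured, ordinary computation applies *)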
A recent example of combining ‘traditional’ AI with the work of Wolfram involved medical records:
“We did a project recently taking medical reports, which were handwritten, typed and digital. But they contain words, and trying to do statistics on those isn’t possible. And so, you’ve got to use the generative AI part for mapping all of these words to things like classes: was this an avoidable death? Yes. No. That’s a nice, structured key value pair. And then once we’ve got that information in structured form (for example a piece of JSON or XML, or whatever your chosen structure), we can then do classical statistics to start saying, ‘Is there a trend? Can we project? Was there an impact from COVID on hospital harms?’ Clear-cut questions that you can approach symbolically with things like means and medians and models.”
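A minimal sketch of the downstream, symbolic half of that pipeline: once each report has been reduced to a structured record (the records below are invented, standing in for the LLM-extracted key–value pairs), the clear-cut questions become ordinary dataset statistics.

(* Invented example records in place of the LLM-extracted key-value pairs. *)
reports = Dataset[{
   <|"year" -> 2019, "avoidableDeath" -> False, "harmScore" -> 2|>,
   <|"year" -> 2020, "avoidableDeath" -> True,  "harmScore" -> 5|>,
   <|"year" -> 2020, "avoidableDeath" -> False, "harmScore" -> 3|>,
   <|"year" -> 2021, "avoidableDeath" -> True,  "harmScore" -> 4|>}];

reports[GroupBy["year"], Mean, "harmScore"]    (* is there a trend by year?  *)
reports[Median, "harmScore"]                   (* overall median harm score  *)
reports[Select[#avoidableDeath &]] // Length   (* how many avoidable deaths? *)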
During our interview, Jon also gave a précis of a presentation that uses an imaginary peanut butter cup manufacturing plant as an example of his organisation’s work. What might be the effect on the product’s shelf life of swapping out a particular ingredient or altering some detail of the recipe?
“LLMs (large language models) will say, ‘Oh, they’ll probably last a few weeks, because peanut butter cups usually sit on the shelf a few weeks.’ But go to a computational model that can plug into the ingredients and compute, and you’ll know this thing should last for eight weeks before it goes off. Or what might that change do to the manufacturing process? A computational model can connect to the digital twin of your manufacturing plant and learn, ‘That will slow things down by 3%, so your productivity will fall by 20% because it creates a bottleneck here.’ LLMs are great at connecting you and your question to the model, maths, data science or the database. And that’s really an interesting three-way meeting of minds.”
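For the shelf-life half of that example, here is a minimal sketch of the kind of calculation meant: a simple first-order degradation model with invented parameters, not Wolfram’s actual food-science model.

(* First-order quality decay: q(t) = q0 Exp[-k t]; all numbers are invented. *)
q0 = 1.0;          (* initial quality, as a fraction             *)
k = 0.05;          (* degradation rate per week                  *)
threshold = 0.67;  (* quality below which the product is "off"   *)

(* Solving q0 Exp[-k t] == threshold for t gives the shelf life: *)
shelfLife = Log[q0/threshold]/k   (* roughly 8 weeks *)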
You can catch Wolfram Research at the upcoming TechEx event in Amsterdam, October 1-2, at stand 166 of the AI & Big Data strand. We can’t guarantee any peanut butter-related discussion at the event, but to discover how powerful modelling and generative AI can be harnessed to solve your specific problems and quandaries, contact the company via its website.