A few weeks can be a long time in the engineering business, just as it can be in showbusiness. Indeed, until recently, AI was perhaps mainly the domain of films.
At a recent family gathering, I was chatting to my nephew. He is a recruitment consultant in the engineering sector, so, as you can imagine, he has more than a passing interest in long-term trends in technical jobs.
Knowing that I have spent a working life in mathematical modelling and engineering simulation, he asked me if I thought that artificial intelligence (AI) would do away with the kind of jobs that my team and I do, running stress analysis or fluid flow simulations. I replied that I saw a distinction between automation and AI.
Repetitive tasks that rely solely on fixed sequences of operations and/or logical decisions are candidates for automation and some categories of “handle-turning” simulation might fall into this category. For example, if you offer a standardised product with geometry that can be characterised by key dimensions, then meshing, simulation set-up, job submission and post-processing could be entirely automated.
Indeed, the idea of “vertical apps” to front-end simulation packages is a large step in this direction and lends itself well to the category of users whose simulations are driven by demonstration of compliance with standards or regulations. So, returning to my nephew’s question, our task might become increasingly focussed on developing these automation procedures.
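This sort of "handle-turning" pipeline can be sketched in a few lines of Python. Everything here is hypothetical: the product, the dimension names, the toy stress formula and the compliance threshold are placeholders for whatever a real vertical app would wrap around an actual meshing tool and solver.

```python
from dataclasses import dataclass

# Hypothetical sketch only: these names do not come from any real
# simulation package; they stand in for calls into a meshing/solver API.

@dataclass
class BracketDimensions:
    """Key dimensions that fully characterise a standardised product."""
    length_mm: float
    width_mm: float
    hole_diameter_mm: float

def build_mesh(dims: BracketDimensions, target_element_mm: float = 2.0) -> int:
    """Stand-in for automated meshing: returns a rough element count."""
    area = dims.length_mm * dims.width_mm  # simplified 2D proxy
    return max(1, int(area / target_element_mm ** 2))

def run_simulation(element_count: int, load_n: float) -> dict:
    """Stand-in for job submission; a real pipeline would invoke a solver."""
    return {"elements": element_count, "max_stress_mpa": load_n / element_count}

def check_compliance(result: dict, allowable_mpa: float) -> bool:
    """Post-processing reduced to a single pass/fail compliance check."""
    return result["max_stress_mpa"] <= allowable_mpa

# The whole loop runs with no engineer in it: dimensions in, verdict out.
dims = BracketDimensions(length_mm=100.0, width_mm=40.0, hole_diameter_mm=8.0)
result = run_simulation(build_mesh(dims), load_n=5000.0)
print(check_compliance(result, allowable_mpa=10.0))  # prints True
```

The point of the sketch is the shape of the workflow, not the numbers: once geometry is reduced to a handful of parameters, meshing, set-up, submission and post-processing become a fixed sequence of function calls.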
However, intelligent analysis, I asserted, was an entirely different matter, based as it is on judgement and experience.
Yes, I recall the buzz in the 1990s around expert systems that would capture this tacit knowledge and insulate employers against vulnerability to the loss of highly qualified staff, but I’ve not witnessed this paradigm having any impact in my sector.
This view was supported, in my mind at least, by an article on dynamic modelling in this month's "The Chemical Engineer", the IChemE's house journal. In particular, the author asserts that, since physical laws are all approximate abstractions of an incredibly complex physical reality, capturing only the essence of the behaviour in which we are interested, the modeller's task is to weed out what doesn't matter, to identify what is important and to select the appropriate level of abstraction.
A good starting point, he suggests, is to look at the world around you and notice how things actually behave. Taking this view, simulation lends itself well to Wilde’s clients who use the technology to gain insights to steer product development and optimisation. Such innovators rely heavily on experience and engineering judgement – their own or ours. These aren’t the sort of attributes we are used to ascribing to computers.
Then two recent events have started to change my thinking.
First, Autodesk announced at the end of June that their AI-related Project Dreamcatcher research would soon become commercially available within our Additive Manufacturing-focused Netfabb software.
I will be very interested to see how this performs once our technical team has tested it out. Here is an animation from Autodesk’s website on Dreamcatcher ‘trying out’ some design alternatives:
Second, only a few days ago (Aug 1st) I read a brief item in The Telegraph reporting that an AI research team had decided to pull the plug on a pair of AI robots that had developed their own mode of communication, one the researchers could no longer understand.
Given this dramatic illustration of the potential of experiential learning, and the notion that AI could increase exponentially, perhaps my activities will need a rethink in the future… and I'll need to have a more formal conversation with my nephew…
Simon Leefe, Technical Director, Wilde Analysis Ltd