Challenges facing AI in science and engineering

One exciting possibility offered by artificial intelligence (AI) is its potential to crack some of the most difficult and important problems facing the science and engineering fields. AI and science stand to complement each other very well, with the former seeking patterns in data and the latter dedicated to discovering fundamental principles that give rise to those patterns. AI and science can greatly increase the speed of engineering innovation and scientific productivity. For example:

  • Biology: AI models such as DeepMind’s AlphaFold offer the opportunity to discover and catalog the structure of proteins, allowing professionals to unlock countless new drugs and medicines.
  • Physics: AI models are emerging as the best candidates to handle crucial challenges in realizing nuclear fusion, such as real-time predictions of future plasma states during experiments and improving the calibration of equipment.
  • Medicine: AI models are also excellent tools for medical imaging and diagnostics, with the potential to diagnose conditions such as dementia or Alzheimer’s far earlier than any other known method.
  • Materials science: AI models are highly effective at predicting the properties of new materials, discovering new ways to synthesize materials and modeling how materials would perform in extreme conditions.

These deep tech innovations have the potential to change the world. But data scientists and machine learning engineers face significant challenges in ensuring that their infrastructure and models actually deliver that change.

The explainability problem
A key part of the scientific method is being able to interpret both the workings and the results of an experiment and to explain them. This is essential for enabling other teams to repeat the experiment and verify its findings. It also allows non-experts and members of the public to understand the nature and potential of the results. If an experiment cannot be easily interpreted or explained, there is likely to be a major obstacle to further testing a discovery, and to popularizing and commercializing it.

When it comes to AI models based on neural networks, we should treat inferences as experiments, too. Although a model generates an inference from the patterns it observes, there can still be randomness or variance in the result. This means that understanding a model’s inferences requires the ability to follow its intermediate steps and logic.
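The variance point is easy to demonstrate. Below is a minimal sketch (not from the article; the toy network and task are illustrative assumptions): the same tiny neural network, trained on the same data with the same code, gives a different prediction depending only on its random initialization.

```python
import numpy as np

def train_tiny_net(seed, steps=200):
    """Train a one-hidden-layer net on y = x^2; the result depends on the seed."""
    rng = np.random.default_rng(seed)
    X = np.linspace(-1, 1, 64).reshape(-1, 1)
    y = X ** 2
    # Random initialization: the only thing that differs between runs.
    W1 = rng.normal(0, 0.5, (1, 8))
    W2 = rng.normal(0, 0.5, (8, 1))
    lr = 0.1
    for _ in range(steps):
        h = np.tanh(X @ W1)
        pred = h @ W2
        err = pred - y
        # Gradient descent on mean squared error.
        W2 -= lr * h.T @ err / len(X)
        W1 -= lr * X.T @ ((err @ W2.T) * (1 - h ** 2)) / len(X)
    # Return the model's prediction at x = 0.5.
    return (np.tanh(np.array([[0.5]]) @ W1) @ W2).item()

a = train_tiny_net(seed=0)
b = train_tiny_net(seed=1)
# Same data, same code: the two runs still disagree on the same input.
print(a, b)
```

The same run repeated with the same seed is exactly reproducible; change the seed and the "experiment" quietly changes with it, which is why each inference deserves the same scrutiny as a lab result.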

This is a problem for AI models that leverage neural networks: many of them currently function as “black boxes”. The steps between data input and data output aren’t labeled, and it isn’t possible to understand why the model gravitated toward a particular inference. That opacity makes the inferences hard to explain, limiting the understanding of what a model does both for the data scientists who develop it and for the DevOps engineers responsible for deploying it on storage and compute infrastructure. This in turn creates a barrier to the scientific community being able to verify and peer review a finding.
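One widely used workaround for black-box models is a post-hoc probe such as permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. Here is a minimal sketch, with a synthetic dataset and a hard-coded stand-in for the black-box model (both are illustrative assumptions, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: only the first of three features actually drives the label.
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)

def accuracy(X_in):
    # Stand-in "black box": a fixed linear score with a threshold.
    pred = (X_in @ np.array([1.0, 0.0, 0.0]) > 0).astype(int)
    return (pred == y).mean()

baseline = accuracy(X)
for j in range(3):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # destroy feature j's information
    print(f"feature {j}: importance = {baseline - accuracy(Xp):.2f}")
```

The probe correctly flags the first feature as the one the model relies on. Note that this explains *which* inputs matter, not *why* the model combines them as it does, which is why explainability remains an open challenge rather than a solved one.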

This is also a problem when trying to commercialize or spin off research outside the laboratory. Researchers who want to get regulators or customers on board will find it difficult to win buy-in if they can’t clearly explain, in a layperson’s terms, why and how a discovery works. And then there’s the issue of ensuring that an innovation is safe for use by the public, especially when it comes to biological or medical innovations.

The reproducibility requirement
Another core principle in the scientific method is the ability to reproduce an experiment’s findings. Scientists can reproduce an experiment to verify that the result was not falsified or accidental. It also allows them to confirm that the hypothesis behind a phenomenon is correct. This provides a way to “double-check” an experiment’s findings, ensuring that the broader academic community and the public can have confidence in the accuracy of an experiment.

However, AI has a major issue in this regard: models can produce markedly different outputs if they are subject to minor tweaks in their code or structure. This can make it difficult to have confidence in a model’s results.
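In practice, reproducibility starts with recording every source of randomness. A minimal sketch (the experiment itself, an ordinary least-squares fit on synthetic data, is an illustrative assumption): pin the run to a single recorded seed, and anyone with the same code and data can re-derive the result bit for bit.

```python
import numpy as np

def run_experiment(seed):
    # Pin every source of randomness to one recorded seed, so the
    # run can be repeated exactly from the code, data and seed alone.
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(100, 3))
    w_true = rng.normal(size=3)
    y = X @ w_true + rng.normal(scale=0.1, size=100)
    # Ordinary least squares as a stand-in for "the model".
    w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w_hat

first = run_experiment(seed=42)
second = run_experiment(seed=42)
# Bit-identical results: the finding can be independently re-verified.
assert np.array_equal(first, second)
```

Seeds alone are not the whole story (library versions, hardware and parallelism can all reintroduce variance), but without them even this basic double-check is impossible.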

The demands of reproducibility can also make scaling up models extremely challenging: achieving it pushes teams to lock down their code, infrastructure and inputs, yet a model that is not flexible in any of these is very hard to scale up. That’s a huge obstacle to moving innovations from the lab to industry and society at large.

Escaping the theoretical grip

The next issue is a less existential one: the embryonic nature of the field. Although papers about AI in engineering and science are published constantly, many remain very theoretical and pay little attention to translating lab discoveries into real-world applications.

This is an inevitable and important phase for most new technologies, but it’s illustrative of the state of AI in science and engineering. AI is poised to make amazing discoveries, yet most scientists still view it as an instrument for use in the lab, not a means of generating new, transformative ideas for the rest of the world.

Ultimately, this is a passing issue, but a shift in mentality away from the theoretical and toward operational and implementation concerns will be key to realizing AI’s potential in this domain, and to addressing major challenges like explainability and reproducibility. If we get serious about scaling AI beyond the laboratory, it will help us achieve major scientific and engineering breakthroughs.

Rick Hao is the lead deep tech partner at Speedinvest.

The post Challenges facing AI in science and engineering appeared first on Venture Beat.