Today’s Daily Brief with Enginuity

Round 2 here we go!
It's been another chaotic couple of days. The US stock market had its biggest single-day loss since April off the back of a Donald Trump post, but quickly recovered. Meanwhile, a ceasefire has been declared between Palestine and Israel, the gold price is now over $6,000 AUD per ounce, and Samsung has just made AI affordable.
Here’s what will be covered in today’s newsletter:
News Update: Samsung's new model, SORA 2, OpenAI to make its own chips.
Tool of the Day: Gemini 2.5 Flash Image (a.k.a. NanoBanana)
The Future of Human-AI systems
Application in Engineering: Visual AI for underground inspections
What’s been happening in AI?

Credit: Thinkstock
Samsung’s New 7M Parameter Model is Better Than Some Leading LLMs (Link) |
It has long been assumed that bigger models mean better results. This has led companies to develop models starting with millions of parameters (the internal variables and calculations that guide a model's responses), then billions, and now over a trillion for cutting-edge models like GPT-4 and similar.
Samsung, however, has completely reversed this logic with a model that is more than 10,000x smaller than these giants, yet performs just as well on many benchmarks.
The trick lies in recursive thinking. The model generates an initial answer, then loops back through itself multiple times to refine it. It is much like writing a report and then editing it half a dozen times.
This breakthrough introduces the potential for a whole new paradigm of models built around reasoning. Whilst unlikely to replace ChatGPT or similar models (training still requires significant compute and energy), this architecture unlocks faster, more accessible models that don't require massive data centres or cloud infrastructure.
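The draft-then-refine loop described above can be illustrated with a toy sketch. This is not Samsung's actual architecture — just a minimal analogy showing how one tiny function, reused over several passes, can converge on a good answer (here, Newton's method refining a guess at a square root):

```python
def refine(answer, target):
    """One 'pass' of the small model: nudge the current draft toward a better one.
    (Here a Newton step for the square root of `target`.)"""
    return answer - (answer * answer - target) / (2 * answer)

def recursive_answer(target, initial_guess=1.0, passes=6):
    """Generate an initial draft, then loop it back through the same tiny
    function several times -- one small set of 'weights' reused repeatedly,
    rather than one enormous single pass."""
    answer = initial_guess
    for _ in range(passes):
        answer = refine(answer, target)
    return answer

print(round(recursive_answer(2.0), 6))  # converges to sqrt(2) = 1.414214
```

The point of the analogy: accuracy comes from iteration, not from the size of the function being iterated.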
SORA Reaches Over 1M Downloads in Its First 5 Days in App Stores (Link) |
The new king of “AI slop” (AI generated content) has arrived and is honestly seriously impressive.
OpenAI's SORA 2 app is the newest of the AI video generation platforms, representing a huge leap in realism with better physics modelling, smoother transitions and embedded audio.
The excitement behind the tool is being tempered by unease, with many concerned about misuse, deepfake risks and ethical implications, not to mention the copyright and IP chaos surrounding it.
On the bright side, at least we have discovered how Jesus would fare in the Olympics:
OpenAI Partners with Broadcom to Build its Own AI Chips (Link) |
The collaboration aims to provide OpenAI with its own dedicated infrastructure development path, reducing its reliance on NVIDIA.
The deal is set to create 10 gigawatts (GW) of computing capacity over four years. For comparison, the world's biggest cities, including New York, Tokyo and Paris, have total power demands in the range of 7-10 GW.
This follows trends from other established tech companies including Google, Amazon and Meta who have all made similar investments in AI infrastructure.
Tool of the Day: NanoBanana
You've likely tried playing around with AI image tools before, only to end up more frustrated than excited when the model gives people six fingers or forgets how to count to ten. Gemini 2.5 Flash Image (NanoBanana), however, seeks to change this game completely.
Unlike traditional image generation models that treat editing as a kind of "regeneration with a prompt" task, NanoBanana was designed for surgical edits with high speed, precision and context awareness. In more detail:
The model has genuine spatial understanding of the image, allowing it to edit only the pixels that require changing.
The model was designed to be lightweight, which enables it to generate edits quickly without sacrificing quality.
The model offers a lightweight API, so it can easily be integrated into other products and programs without significant infrastructure requirements.
This sort of tool has wide-ranging applications, from photography touch-ups and content cleanup to concept visualisation (e.g. before-and-after visuals) and sensitive information removal.
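The "edit only the pixels that require changing" idea can be sketched in miniature. This is a toy illustration of mask-scoped editing, not NanoBanana's real internals — the `surgical_edit` function and the tiny grayscale image are made up for the example:

```python
def surgical_edit(image, mask, edit):
    """Apply `edit` only to pixels flagged in `mask`; leave everything else
    byte-for-byte untouched. Contrast with 'regeneration with a prompt',
    which redraws the whole image and risks drifting everywhere."""
    return [
        [edit(px) if flagged else px for px, flagged in zip(img_row, mask_row)]
        for img_row, mask_row in zip(image, mask)
    ]

# A 3x3 grayscale image; we brighten only the centre pixel.
image = [[10, 10, 10],
         [10, 50, 10],
         [10, 10, 10]]
mask  = [[0, 0, 0],
         [0, 1, 0],
         [0, 0, 0]]

edited = surgical_edit(image, mask, lambda px: min(px + 100, 255))
print(edited)  # only the centre changes: [[10, 10, 10], [10, 150, 10], [10, 10, 10]]
```

The surrounding pixels are returned unchanged rather than regenerated, which is why this style of editing avoids the six-fingers problem creeping into parts of the image you never asked to touch.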

We all know the Mona Lisa, but did you know she was an engineer?
The Future of Human-AI systems
When AI enters conversations, there is a very real and justifiable fear that the tools we build might one day replace the roles we have spent years training for. With AI reshaping workflows and work environments at a rapid pace, it is important to recognise that the most powerful systems aren't purely automated; they are human-AI collaborations.
As engineers, the opportunity isn't to compete with AI, but to use it to make our work smarter, safer and more impactful. Critical to this is considering how we integrate these systems, which involves four key factors:
Scale: Is there enough data to make it work, and is the payoff worth it? This comes down not just to technical viability but also to the ROI of a project.
Accountability: For every workflow completed in our industry someone has to own the outcome. This person must therefore be accountable for collaborating with the AI model to ensure quality outputs through comprehensive validation. In practice, this can simply mean having an engineer edit outputs, or more proactively designing systems that keep humans in the loop to encourage active discussion, feedback, and reduce complacency.
Human Values & Decisions: Engineering is never purely technical. It is full of judgement calls, trade-offs and decisions driven by personal and corporate values. AI can help analyse and optimise, but it cannot capture every nuance of human priorities or ethics.
Skill Development: With every process that is automated, we risk losing skills or hands-on knowledge. This is not always bad, but we should understand which business or engineering skills might be lost, and especially how this might impact younger professionals.
Ultimately, the future of engineering isn’t human or AI, but instead is humans with AI. Systems developed for automation must be designed and implemented with this partnership in mind.
AI in Application
Deep underground, maintenance crews can often be found inspecting critical infrastructure ranging from mine shafts to train tunnels. Historically, engineers would engage crews to assess these narrow galleries, shining lights, inspecting concrete linings and hoping that no dangerous cracks are missed. This process is slow, presents significant human safety risks, and is vulnerable to human fatigue and visual limitations.
Over recent years, however, AI-powered camera rigs and drones have begun to transform this work. These systems navigate the same spaces equipped with trained visual AI models that flag cracks of all sizes in real time, instantly processing the 360-degree images captured.
This sort of application can deliver major benefits:
Speed: AI can rip through thousands of images per hour to detect features faster than humans.
Risk: Less human presence in confined and hazardous tunnel spaces means fewer worker safety exposures.
Consistency: AI never tires, applying the same thresholds, metrics and classifications throughout an assessment.
Quantitative Insights: Beyond classification, advanced models could also analyse key quantitative metrics such as length, width and orientation, enabling richer and more actionable data.
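To make the quantitative-insights point concrete, here is a deliberately simplified sketch of turning detected crack pixels into engineering metrics. The function, pixel scale and sample crack are all invented for illustration; real systems use skeletonisation and width profiling rather than a straight-line fit between extreme points:

```python
import math

def crack_metrics(pixels, mm_per_px=0.5):
    """Estimate length and orientation of a detected crack from its pixel
    coordinates, by fitting the segment between its two extreme points.
    `mm_per_px` is the (assumed) camera calibration factor."""
    (x0, y0), (x1, y1) = min(pixels), max(pixels)
    length_mm = math.hypot(x1 - x0, y1 - y0) * mm_per_px
    orientation_deg = math.degrees(math.atan2(y1 - y0, x1 - x0))
    return length_mm, orientation_deg

# A roughly diagonal crack detected across five pixels of a lining image
crack = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
length, angle = crack_metrics(crack)
print(f"length = {length:.2f} mm, orientation = {angle:.0f} deg")
```

Numbers like these, rather than a bare "crack / no crack" label, are what let an asset owner trend deterioration between inspections and prioritise repairs.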
Researchers and industry professionals alike have already started developing mobile tunnel scanning systems using AI techniques such as convolutional neural networks and, more recently, LLMs; however, much is still to be explored.
It’s a space that’s moving fast, and it’s packed with potential for anyone willing to bring LLMs into the real world of engineering.
For more information check out:
In Other News (There is just too much happening!)
Australia is banking on critical and rare earth minerals, working on a plan for a A$1.2 billion reserve operational by 2026 (Link)
CSIRO (particularly the Data61 arm for AI, robotics and data science) is restructuring and not renewing 100 contracts. (Link)
DroneShield invests $13M in Adelaide R&D facility for counter-drone electronic warfare (Link)
Engineers Australia just hosted an awesome panel discussion about engineering trust in the age of agentic AI with industry professionals from Aurecon, Autodesk and Arup (Link)
OpenAI’s GPT-5 has been found to reduce political bias in its responses by 30% (Link)
This newsletter seeks to engage and challenge the way engineers see AI and its potential for application in industry. Any thoughts, questions or arguments are welcome!

