Issue #1 · Physical AI Safety Dispatch — April 2026
First issue. Three posts. One exclusive insight. The gap between robot intelligence and robot safety infrastructure is not closing — it's widening.
I started writing about Physical AI safety on LinkedIn in late April 2026. Three posts a week. Tuesday, Wednesday, Thursday. No product. No company. Just the analysis I wish someone else had already published.
This is the first issue of the monthly dispatch — a digest of what I wrote, what I'm reading, and one insight I won't post anywhere else.
Three posts that got the strongest response
1. The missing category
I tried to draw a chart with two lines. Robot intelligence investment versus robot safety infrastructure investment, 2015–2025.
I couldn't. Because one line doesn't exist.
Robotics venture funding in 2025: approximately $8 billion. Tracked by Crunchbase, PitchBook, OECD. Hundreds of reports. Dozens of categories.
Robot safety infrastructure funding in 2025: no category exists. No tracker. No data series. I searched every major VC database. The category is absent from all of them.
The absence of data is the data.
Sources: Crunchbase Robotics Funding Data (2025); OECD AI Investment Report (February 2026).
2. Six robot incidents. One pattern.
Six publicly reported robot incidents between 2021 and 2025. Different sectors. Different manufacturers. Different countries. One common thread: safety architecture treated as a layer added at the end, not a constraint designed in from the start.
- 2021 — Tesla Giga Texas: robot arm pinned an engineer against a surface.
- 2023 — Tesla Fremont: $51 million lawsuit after workplace injury.
- 2023 — Goseong, South Korea: industrial robot fatality during inspection.
- 2024 — Ecovacs Deebot X2: hacked robot vacuums shouted slurs at owners.
- 2024 — OSHA study: 77 robot-related workplace accidents over four years. 60% caused by "unexpected activation."
- 2026 — Tesla Austin: 14 robotaxi crashes in first months of supervised deployment.
The pattern isn't that robots are dangerous. The pattern is that the safety layer between the AI and the physical world is almost always software — and software fails with the system it's supposed to be watching.
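The alternative to a software-only safety layer is a monitor that keeps working even when the control loop it watches hangs or crashes. A minimal sketch of that heartbeat-watchdog pattern, in Python for illustration (class and method names are mine, not from any standard; a real implementation would live on separate hardware, not in a thread):

```python
import time
import threading

class Watchdog:
    """Illustrative heartbeat watchdog, independent of the loop it monitors.

    An in-process safety check dies with the process it guards. A watchdog
    with its own clock and its own execution context still notices the
    missing heartbeat and can drive the system to a safe state.
    """

    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self._last_beat = time.monotonic()
        self._lock = threading.Lock()
        self.tripped = False

    def heartbeat(self) -> None:
        # Called by the monitored control loop on every healthy cycle.
        with self._lock:
            self._last_beat = time.monotonic()

    def check(self) -> bool:
        # Called from an independent context; trips once the control
        # loop has been silent longer than the allowed timeout.
        with self._lock:
            silent = time.monotonic() - self._last_beat
        if silent > self.timeout_s:
            self.tripped = True  # in hardware: de-energize, open a relay
        return self.tripped
```

The design point is the separation: `check()` does not depend on the control loop running at all, which is exactly what a bolted-on software layer cannot guarantee.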
Sources: AP News, CBS News, ABC News, Manufacturing Dive, Claims Journal, ScienceDirect (OSHA study).
3. SIL — four levels of safety integrity
IEC 61508 is the mother standard for functional safety. Published by the International Electrotechnical Commission. It defines four Safety Integrity Levels — SIL 1 through SIL 4.
SIL 1 — Basic diagnostics. Simple failure detection. Single-channel architectures can qualify. Typical applications: HVAC controls, basic monitoring.
SIL 2 — Redundancy often required. Diagnostic feedback. Most SIL 2 designs use dual-channel architectures. Typical applications: food processing, industrial robotics.
SIL 3 — Dual-channel minimum. Advanced diagnostics. Hardware fault tolerance of at least 1. The system must survive one hardware failure and still perform its safety function. Typical applications: medical devices, energy protection systems.
SIL 4 — The highest level. Triple redundancy. Hardware fault tolerance of at least 2. Fail-operational. Typical application: nuclear plant control systems.
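The hardware fault tolerance figures above correspond to voting architectures: HFT 1 can be met by a one-out-of-two (1oo2) design, HFT 2 by e.g. one-out-of-three (1oo3). A minimal sketch of M-out-of-N trip voting (the function name and trip convention are illustrative, not from IEC 61508):

```python
def voted_trip(demands: list[bool], m: int) -> bool:
    """Trip (drive to the safe state) when at least m of N channels demand it."""
    return sum(demands) >= m

# 1oo2: either channel can trip -> tolerates one channel failing
# dangerously silent (HFT 1, the SIL 3 figure above).
assert voted_trip([True, False], m=1)

# 1oo3: tolerates two dangerous channel failures (HFT 2, the SIL 4
# figure), at the cost of more spurious trips; real designs often use
# 2oo3 voting to balance safety against availability.
assert voted_trip([True, False, False], m=1)
```

The trade-off the voting parameter encodes: lowering `m` makes the system harder to fail dangerously but easier to trip spuriously, which is why availability-sensitive plants favor 2oo3 over 1oo3.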
Most robots shipping in 2026 have no SIL rating at all. Not SIL 1. Not any level. They haven't been assessed against the standard.
The EU Machinery Regulation 2023/1230 changes this from 20 January 2027.
Sources: IEC 61508; TÜV SÜD; EKTOS functional safety guidelines.
What I'm reading this month
IEC 61508 Edition 2.0 — the full standard, not the summary. Seven parts. The requirements for software in safety-related systems (Part 3) are where most Physical AI companies will struggle first. The standard assumes you can verify every execution path. Neural networks don't work that way.
EU Machinery Regulation 2023/1230 — the full text on EUR-Lex. Article 6 and Annex I are where the new AI-specific requirements live. The regulation applies from 20 January 2027. There is no transition period. The old Machinery Directive 2006/42/EC simply stops applying on that date.
What's coming in May
Israeli Physical AI ecosystem map. The Israel Innovation Authority's April 2026 strategy report identified 123 companies. I think the real number is closer to 150–160. I'm building the expanded map — by category, by funding stage, by safety readiness.
Insurance and liability deep dive. When a robot causes injury, who pays? The manufacturer? The deployer? The AI model provider? The answers are different in the EU, the US, and Japan. And the insurance industry is starting to notice.
EU regulatory sandbox analysis. Singapore and the UAE already have AI regulatory sandboxes. The EU's will be required by August 2026 under the AI Act. What this means for Physical AI companies trying to certify novel architectures.
A note I won't post on LinkedIn
I spent two weeks reading the Israel Innovation Authority's April 2026 AI strategy report. 123 Physical AI companies. $12 billion in venture capital. 2,347 patents in computer vision alone. Impressive numbers. Then I did something the report didn't do — I checked how many of those 123 companies mention IEC 61508, SIL, or functional safety anywhere on their website. Fewer than 30. The government mapped the ecosystem by capability. Nobody mapped it by safety readiness. That's the map I'm building next.
— Mati
Physical AI Safety Dispatch is a monthly newsletter by Mati Melchior. Published on the 1st of every month. Follow the weekly analysis on LinkedIn and X.
Read more at physical-ai-safety.com.