Large Language Models (LLMs) such as ChatGPT 5 and Grok 4 are becoming more capable and versatile. The real question is whether they can be used for serious work in automotive parts development.
LLMs can write, summarize, and even compose music. But automotive engineering demands more than creativity. It demands compliance, traceability, and precision. So the question is: Can LLMs generate compliant, traceable, and review-ready documentation that meets Automotive SPICE, ISO 26262, and ISO 21434 requirements?
To find out, we conducted a one-day experiment using LLMs to create an end-to-end draft for a Door Lock Control ECU.
The LLM Experiment
The goal was simple: to generate, within one day, a complete documentation draft for a small but safety-relevant subsystem—a car door lock controller.
The intention wasn’t to create production-ready data, but to evaluate how far AI could accelerate early V-Model phases—from requirements elicitation to testing and compliance documentation.
No external tools were used. No DOORS, no Integrity, no code generators. Just LLMs, text prompts, and office formats.
Because Volkswagen’s projects (with KGAS and Formel Q) are known for rigor, VW was chosen as the reference OEM. The work was time-boxed to one Saturday.
Customer Requirements (SYS.1)
Using ChatGPT 5.0 and Grok 4.0, I began with the customer requirements. No existing example was provided; everything was generated from scratch.
After several iterations, the core SYS.1 query became:
Develop a Door Lock Control ECU
Description: Controls electric door locks via key-fob signal or button input, with feedback.
Key points: Response < 500 ms, fail-safe unlock, ASIL A classification, signal authentication.
Deliverable: “Lastenheft-like” specification including ~50 requirements compliant with VW practice and KLH Gelbband 2023.
The resulting document contained 87 customer requirements, ready for trace-down to SYS.2.

Full list: https://github.com/CORE-SPICE/DOORLOCK_DEMO/tree/main/SYS.1
The Left Side of the V
System Specification (SYS.2)
SYS.2 system requirements were derived from SYS.1.

It took several iterations to arrive at sufficiently analyzed system requirements, including verification criteria.
Example of a requirement:
| SysRS-079 | System shall unlock all doors on valid crash signal (e.g., pulse>5V for >10ms), verifiable by signal injection. |
| --- | --- |
| Status | Approved |
| Derived from (customer requirement) | REQ-II-5.6 |
| Safety Rating | ASIL A |
| Priority | High |
| Risk | High |
| Verification Method | Test |
| Test Level | System |
| Discipline | SYS/HW |
| Verification Criteria | Signal injection passed; 100% unlocks on pulse>5V for >10ms; no misses. |
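A requirement like SysRS-079 maps almost directly onto firmware logic. Purely as an illustration from our side (this sketch is not part of the generated work products, and all names are hypothetical), the 5 V / 10 ms qualification could look like this in C, assuming a 1 ms sampling task:

/* Illustrative sketch only (hypothetical names) - not part of the generated units.
 * Qualifies the crash input per SysRS-079: pulse >5 V sustained for >10 ms. */
#include <stdbool.h>
#include <stdint.h>

#define CRASH_THRESHOLD_MV 5000u /* >5 V */
#define CRASH_QUALIFY_MS   10u   /* >10 ms sustained */

static uint16_t s_crash_high_ms = 0u; /* consecutive milliseconds above threshold */

/* Called once per 1 ms tick with the measured crash-line voltage in millivolts;
 * returns true once the pulse has been qualified. */
bool CrashInput_Qualify_1ms(uint16_t crash_line_mv)
{
    if (crash_line_mv > CRASH_THRESHOLD_MV) {
        if (s_crash_high_ms < UINT16_MAX) {
            s_crash_high_ms++;
        }
    } else {
        s_crash_high_ms = 0u; /* pulse interrupted - restart qualification */
    }
    return (s_crash_high_ms > CRASH_QUALIFY_MS);
}

The same 5 V / 10 ms criterion reappears verbatim in the verification criteria, which is what makes the requirement testable by signal injection.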
See https://github.com/CORE-SPICE/DOORLOCK_DEMO/tree/main/SYS.2 for the complete document.
System Architecture (SYS.3)
SYS.3 (system architecture) was derived from SYS.2 and contained a textual architecture plus a simple LLM-generated block diagram (a separate query). Though basic, it demonstrated consistent traceability and structure typical for ASPICE-compliant work.

The result was, at best, a glimpse of the architecture, but it gave at least a rough idea of the system architectural design. A full-blown architecture could be further refined using SysML (see SWE.2 examples).
SYS.3 full set: https://github.com/CORE-SPICE/DOORLOCK_DEMO/tree/main/SYS.3
Software Requirements (SWE.1)
Thirty-three software requirements (SWE.1) were derived from SYS.2 and SYS.3, retaining traceability to both levels.

| SwRS-001 | Software shall implement the finite-state machine with states Locked, Transition, Unlocked, handling retries and watchdog recovery. |
| --- | --- |
| Status | Approved |
| Trace from SYS.2 | SysRS-061; SysRS-072; SysRS-093; SysRS-100; SysRS-114; SysRS-109; SysRS-110 |
| Trace from SYS.3 | ARC-STM-003; ARC-SCN-001/002/004 |
| Category | Functional |
| Priority | High |
| Risk | Medium |
| Verification Method | SIL model test + HIL timing |
| Discipline | SW (meaning: no FuSa or Cybersecurity) |
| Verification Criteria (KGAS-compliant) | All transitions executed ≤500 ms; retries ≤3; on watchdog/reset → fail-safe unlock; 100% state/transition coverage |
As at the SYS levels, we had to iterate several times to reach a realistic level of granularity in the software requirements derived from the SYS.2/SYS.3 documents.
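SwRS-001 is concrete enough to sketch in code. The following minimal C outline is our own illustration, not one of the generated SWE.3 units; it only shows the Locked/Transition/Unlocked states, the retry limit of three, and the fail-safe unlock on watchdog reset named in the verification criteria:

/* Illustrative sketch only (hypothetical names) - not one of the generated SWE.3 units. */
#include <stdbool.h>
#include <stdint.h>

typedef enum { STATE_LOCKED, STATE_TRANSITION, STATE_UNLOCKED } LockState_t;

#define MAX_RETRIES 3u

static LockState_t s_state   = STATE_LOCKED;
static uint8_t     s_retries = 0u;

/* Called cyclically; the inputs are abstracted as booleans for brevity. */
void LockFsm_Step(bool unlock_request, bool actuator_done, bool actuator_fault,
                  bool watchdog_reset)
{
    if (watchdog_reset) {
        s_state   = STATE_UNLOCKED; /* fail-safe unlock on watchdog/reset */
        s_retries = 0u;
        return;
    }

    switch (s_state) {
    case STATE_LOCKED:
        if (unlock_request) {
            s_state   = STATE_TRANSITION;
            s_retries = 0u;
        }
        break;

    case STATE_TRANSITION:
        if (actuator_done) {
            s_state = STATE_UNLOCKED;
        } else if (actuator_fault && (++s_retries > MAX_RETRIES)) {
            s_state = STATE_UNLOCKED; /* retry budget exhausted - fail safe */
        }
        break;

    case STATE_UNLOCKED:
    default:
        /* Locking path omitted for brevity. */
        break;
    }
}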
See https://github.com/CORE-SPICE/DOORLOCK_DEMO/tree/main/SWE.1
Software Architecture (SWE.2)
Software Architecture was derived from SWE.1, resulting in a textual architecture specification of the kind commonly used in ASPICE-compliant projects. ChatGPT automatically added relevant aspects of the software architecture:
- SW components
- SW interfaces
- Dynamic aspects
- State machines
- SW data types
- Traceability
- Non-functional requirements elements (e.g., cybersecurity)
In addition, LLMs proposed using PlantUML diagrams and generated them.
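To make the list above more tangible: a software component with its interfaces and data types at this level is essentially a header contract. The following C header is our own hypothetical sketch, not taken from the generated SWE.2 document:

/* lock_manager.h - illustrative component interface (hypothetical names). */
#ifndef LOCK_MANAGER_H
#define LOCK_MANAGER_H

#include <stdint.h>

/* SW data type exposed at the component boundary. */
typedef enum {
    LOCKMGR_LOCKED = 0,
    LOCKMGR_TRANSITION,
    LOCKMGR_UNLOCKED
} LockMgrState_t;

/* Provided interface of the LockManager component. */
void           LockMgr_Init(void);
void           LockMgr_MainFunction_10ms(void); /* dynamic aspect: 10 ms cyclic task */
LockMgrState_t LockMgr_GetState(void);

/* Required interface: authenticated command input from the CAN adapter. */
void           LockMgr_OnAuthenticatedCommand(uint8_t command_id);

#endif /* LOCK_MANAGER_H */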

(For the complete SWE.2, including the images and the PlantUML diagrams, see https://github.com/CORE-SPICE/DOORLOCK_DEMO/tree/main/SWE.2)
SW Detailed Design (SWE.3)
SWE.3 elements … were created on two levels:
- Software detailed design
- Software Units
The detailed design came out very simplified, but we did not drill down further, as our intention was—as throughout the document—to generate a “proof of concept” methodology. Even with a simple prompt, ChatGPT was able to generate a fairly comprehensive specification in one Excel workbook, including:
- Module Units
- API
- Algorithms
- Data dictionary
- Error handling
- Calibration
- Unit test hooks
- Traceability records
For the very first iteration of the detailed design documentation, the result was pretty impressive.
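To give a flavor of what “calibration” and “unit test hooks” mean in this context, here is a small hypothetical fragment of ours (not copied from the generated workbook):

/* Illustrative detailed-design fragment (hypothetical names). */
#include <stdint.h>

/* Calibration: actuator drive time, adjustable per vehicle variant. */
#define CAL_DEFAULT_DRIVE_TIME_MS 350u
static uint16_t cal_drive_time_ms = CAL_DEFAULT_DRIVE_TIME_MS;

uint16_t LockAct_GetDriveTimeMs(void)
{
    return cal_drive_time_ms;
}

/* Unit test hook: lets the test harness override the calibration value. */
#ifdef UNIT_TEST
void TestHook_SetDriveTime(uint16_t drive_time_ms)
{
    cal_drive_time_ms = drive_time_ms;
}
#endif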

(see https://github.com/CORE-SPICE/DOORLOCK_DEMO/tree/main/SWE.3/DD for the full content).
The LLM also generated units as C source code.
Software units were traced to the software architecture (SWE.2) and the software requirements (SWE.1). The resulting example (C code) consisted of 34 units.
Example of a simple CAN adapter:
/* ==========================================================================
 * Unit: can_adp (SWU-011) - implementation
 * Trace: SWE.1 SwRS-010, SwRS-019, SwRS-025 | SWE.2 SWD-CMP-011
 * ========================================================================== */
#include "can_adp.h"
#include <string.h>

#define MAX_ID 0x7FFu /* placeholder allowed range */

static uint8_t s_last_seq_table[16] = {0}; /* LUT by (id & 0xF) */

static bool id_allowed(uint32_t id)
{
    /* Placeholder policy: standard ID range only */
    return (id <= MAX_ID);
}

static bool crc_ok(const CANMsg_t* m)
{
    /* Placeholder CRC check stub: always true, to be replaced with real CRC */
    (void)m;
    return true;
}

bool CAN_Validate(const CANMsg_t* m)
{
    if (m == 0) return false;
    if (!id_allowed(m->id)) return false;
    if (m->dlc > 64u) return false;
    if (!crc_ok(m)) return false;

    uint8_t* pseq = &s_last_seq_table[m->id & 0xFu];
    uint8_t last = *pseq;

    if (m->seq == last) {
        return false; /* duplicate */
    }
    /* allow wrap-around; only reject if strictly older */
    if ((uint8_t)(m->seq - last) > 200u) {
        return false;
    }
    *pseq = m->seq;
    return true;
}
Like the rest of the example, the code is exceedingly simplified, but it appears at least syntactically correct.
(Complete specification: https://github.com/CORE-SPICE/DOORLOCK_DEMO/tree/main/SWE.3/Unit%20Construction)
The Right Side of the V
System Qualification Test (SYS.5)
Derived from the system requirements (SYS.2), a complete set of system test cases was generated, based on the requirements and the verification criteria specified therein.

In this example (SYS-TC-007), the LLM created the following values for the test case:
| SYS-TC-007 | Crash Unlock Timing |
| --- | --- |
| Purpose | Force unlock on crash within time budget. |
| Priority | High |
| ASIL | A |
| Cybersecurity | No |
| Pre-condition | State=Locked; Crash_Line controllable. Test equipment: programmable power supply 0–16 V, ripple <50 mV; DMM/ADC tap for VBAT; oscilloscope (≥1 MS/s) on Motor_En, Motor_PWM, OCSense; CAN interface (FD-capable) logging @500k/2M, IDs per DBC; RF TX emulator with frame scripting; digital IO to assert Crash_Line; time sync via PPS or shared trigger |
| Test Steps | 1) Scope CH1=Crash_Line, CH2=Motor_En. 2) Arm single-shot trigger on Crash_Line rising. 3) At T0, assert Crash_Line HIGH. 4) Measure Motor_En rising at T1; compute start latency T1-T0. 5) Verify completion and status (CAN 0x5A1) at T2/T3. |
| Expected Results | Unlock actuation starts quickly; completes; status reported. |
| Acceptance Criteria | Start latency ≤ 100 ms in 10/10 trials; status publish per normal (≤100 ms after completion). |
| Verification Method | Test |
| Environment | HIL/Vehicle |
| Status | Planned |
| Trace to SysRS | SYSRS-007 SYSRS-024 |
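The acceptance criterion (start latency ≤ 100 ms in 10 of 10 trials) maps naturally onto a test-bench script. The sketch below is our own illustration; the Bench_* helper functions are hypothetical stand-ins for the HIL I/O listed in the pre-condition:

/* Illustrative HIL check for SYS-TC-007; the Bench_* helpers are hypothetical. */
#include <stdbool.h>
#include <stdint.h>

#define TRIALS         10u
#define MAX_LATENCY_MS 100u

/* Provided by the (hypothetical) test-bench abstraction layer. */
extern void     Bench_ResetDutToLocked(void);
extern void     Bench_AssertCrashLine(void);         /* T0 */
extern void     Bench_ReleaseCrashLine(void);
extern uint32_t Bench_MeasureMotorEnLatencyMs(void); /* T1 - T0 via scope trigger */

bool SysTc007_CrashUnlockTiming(void)
{
    for (uint32_t trial = 0u; trial < TRIALS; trial++) {
        Bench_ResetDutToLocked();
        Bench_AssertCrashLine();
        uint32_t latency_ms = Bench_MeasureMotorEnLatencyMs();
        Bench_ReleaseCrashLine();

        if (latency_ms > MAX_LATENCY_MS) {
            return false; /* a single miss fails the 10/10 criterion */
        }
    }
    return true;
}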
As a nice by-product, ChatGPT automatically generated a traceability record as a requirements test coverage metric.

(Complete system test catalog: https://github.com/CORE-SPICE/DOORLOCK_DEMO/tree/main/SYS.5)
SYS.4, SWE.6, SWE.5, and SWE.4
The remaining test cases on the right side of the V were derived from the respective left-side levels (SYS.3, SWE.1, SWE.2, and SWE.3) in the same way.
See the remaining test catalogs:
- SWE.5 SW integration test: https://github.com/CORE-SPICE/DOORLOCK_DEMO/tree/main/SWE.5
- SWE.4 unit tests: https://github.com/CORE-SPICE/DOORLOCK_DEMO/blob/main/SWE.4/Door_Lock_Control_ECU_SWE4_Unit_Test_Catalogue.xlsx
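At the SWE.4 level, the generated catalog exercises the individual units shown earlier. As our own illustration (the actual catalog is a spreadsheet, not code), a minimal duplicate-frame test for CAN_Validate could look like this, assuming the CANMsg_t fields (id, dlc, seq) implied by the unit code:

/* Illustrative SWE.4 unit test sketch; assumes CANMsg_t exposes id, dlc, seq. */
#include <assert.h>
#include <stdbool.h>
#include "can_adp.h"

static void test_duplicate_sequence_is_rejected(void)
{
    CANMsg_t msg = { .id = 0x123u, .dlc = 8u, .seq = 7u };

    assert(CAN_Validate(&msg) == true);  /* first frame accepted           */
    assert(CAN_Validate(&msg) == false); /* same sequence number: rejected */
}

int main(void)
{
    test_duplicate_sequence_is_rejected();
    return 0;
}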
Quality Assurance (SUP.1)
Using Grok, we calculated a very simplified traceability coverage, which revealed a few gaps.
It appears realistic to expand the traceability report to cover more complex traceability concepts, but we did not dive into the traceability aspect any further.
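For reference, the metric behind such a report is simple arithmetic; the sketch below is our own illustration with a hypothetical traced-requirements count, not the figures from the workbook:

/* Illustrative coverage calculation; the traced count is a made-up placeholder. */
#include <stdio.h>

int main(void)
{
    const unsigned total_requirements  = 87u; /* SYS.1 count from this experiment  */
    const unsigned traced_requirements = 80u; /* hypothetical value for the sketch */

    double coverage_pct = 100.0 * (double)traced_requirements / (double)total_requirements;
    printf("Traceability coverage: %.1f %%\n", coverage_pct);
    return 0;
}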
After a complete iteration of the door lock system, we also audited the results using Grok 4.0 to identify consistency and traceability gaps, which suggested the potential to automate quality assurance.
(See https://github.com/CORE-SPICE/DOORLOCK_DEMO/blob/main/Door_Lock_Control_Traceability_Coverage.xlsx)

(See https://github.com/CORE-SPICE/DOORLOCK_DEMO/blob/main/2025-10-13_Audit%20Report.docx )
Looking at the generated system specification from a quality perspective, we asked which gaps would have to be closed and which improvements would be necessary to raise its quality. Using a single prompt, we generated a surprisingly good audit report.

This is an excerpt of the audit findings. See https://github.com/CORE-SPICE/DOORLOCK_DEMO/blob/main/2025-10-13_Audit%20Report.docx for the complete document.
The result is, of course, very simplified and most likely incomplete. However, it demonstrated the potential of using LLM-generated documentation for quality assurance and compliance.
Key Observations
We were able to create a “zero draft” of the specification documents required by ASPICE at the SYS and SWE levels, including full traceability, in just one day. The results were impressive at first glance but overly superficial on closer inspection. However, we must not forget that the documentation was generated in a few hours of work; even such a simple but comprehensive draft would normally take weeks to produce, at a cost of tens of thousands of dollars.
The best way to work with LLMs is to iterate with the models until the desired level of quality is achieved.
The central insight is that the future of automotive development will be hybrid, combining human expertise with AI-generated work products.
Conclusion
This experiment showed that LLMs can produce consistent, compliant, structured, and traceable documentation for an automotive subsystem in a single day. While far from replacing engineers, they can jump-start the development process and ensure a consistent baseline across the V-model.
Correctness, reasoning, and tool integration remain open challenges—but the trajectory is clear. With proper human oversight, AI will become a standard part of the automotive engineering toolkit, reshaping how projects start and how compliance is achieved.
The CORE SPICE approach captures this philosophy: Automate everything that can be automated—and let humans focus on what truly matters.
Reference
[1] All files created in this example: https://github.com/CORE-SPICE/DOORLOCK_DEMO

[2] VDA KLH: https://vda-qmc.de/wp-content/uploads/2023/11/KLH_Gelbband_2023_EN.pdf
[3] KGAS 4.2 (not publicly available online)
I am a project manager (Project Management Professional, PMP), a Project Coach, a management consultant, and a book author. I have worked in the software industry since 1992 and as a management consultant since 1998. Please visit my United Mentors home page for more details. Contact me on LinkedIn for direct feedback on my articles.
