SAP's Tabular AI model is built for business data

A message from: SAP

Forget chatbots — SAP has developed an enterprise AI model that speaks spreadsheet.
SAP Chief Technology Officer Philipp Herzig breaks down how SAP-RPT-1 — a table-native foundation model — is redefining enterprise AI. Unlike traditional LLMs trained on internet text, SAP-RPT-1 is built from the ground up to understand structured business data like ledgers and invoices. Herzig explains how the model reduces errors, accelerates predictions and enables teams to act on insights without retraining.
1. First things first: What is SAP-RPT-1 and how does a table‑native "Relational Pretrained Transformer" differ from an LLM that handles text?
Herzig: SAP-RPT-1 is a foundation model built specifically for tabular business data – ledgers, invoices, inventories and other relational records. Importantly, it's not an LLM. LLMs are trained on public language and excel at text and images; RPT-1 is engineered to read rows, columns, joins and business data types natively.
That matters because mission‑critical enterprise data is private and structured; SAP's decades running customer systems give us unique domain knowledge to pretrain a model that understands table semantics. In practice this specialist design reduces format‑related errors and hallucinations, runs far more efficiently and is better suited to enterprise prediction tasks than shoehorning tables into text for an LLM.
2. Next up: Why do enterprises need a model specifically built for tabular, relational business data — what practical problems does SAP-RPT-1 solve that more general LLMs struggle with?
Herzig: Enterprises historically build many bespoke models — one for forecasting, one for collections, one for lead scoring — each with long development cycles, high data requirements and heavy ops.
RPT‑1 replaces that fragmentation with a single table‑native engine that handles many prediction tasks without per‑task retraining. It avoids the fragile step of converting tables into text (which induces errors), leverages SAP's decades of experience running our customers' most important business applications, and lets analysts get predictions interactively.
For a new retailer, for example, RPT‑1 can combine a few local sales examples with learned retail seasonality to estimate holiday uplift by region — producing actionable stocking guidance in days, not months.
3. The breakdown: How does SAP-RPT-1's in‑context learning work in practice? What does a business user need to provide to get an accurate prediction, and how quickly can they get results?
Herzig: In‑context learning means you teach the model by example rather than retrain it. A user provides a small set of labeled rows (e.g., invoices tagged "paid on time" vs. "late") and the rows to score. RPT‑1 merges those examples with its pretrained knowledge of table patterns and generalizes immediately.
The result is interactive prediction: analysts typically get reliable outputs within an interactive session and can iterate quickly by adding examples, correcting errors and refining the task. The number of examples required depends on task complexity, but often a handful to a few dozen suffices.
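The workflow described above – labeled example rows plus unlabeled rows submitted together in one call, with no retraining step – can be sketched as follows. The predictor here is a deliberately simple stand-in (a 1-nearest-neighbour lookup over numeric columns), not the SAP-RPT-1 model or its API; it only illustrates the in-context pattern of "teach by example".

```python
# Illustrative sketch of in-context learning for tabular prediction:
# labeled examples and rows to score are passed in a single call, and the
# predictor generalizes immediately -- no training loop anywhere.
# This stand-in predictor is hypothetical, NOT the SAP-RPT-1 API.

def predict_in_context(examples, labels, rows_to_score):
    """Score rows using only the labeled examples provided in context."""
    def distance(a, b):
        # Squared Euclidean distance over the numeric feature columns.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    predictions = []
    for row in rows_to_score:
        # Take the label of the closest labeled example row.
        nearest = min(range(len(examples)),
                      key=lambda i: distance(examples[i], row))
        predictions.append(labels[nearest])
    return predictions

# A handful of labeled invoices: (amount, days_until_due) -> payment outcome.
examples = [(120.0, 30), (5000.0, 5), (80.0, 45), (7500.0, 2)]
labels = ["paid on time", "late", "paid on time", "late"]

# New invoices to score.
print(predict_in_context(examples, labels, [(100.0, 40), (6200.0, 3)]))
# -> ['paid on time', 'late']
```

Adding or correcting examples between calls is the whole iteration loop: the "model" never changes, only the context it is given.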
We strongly believe in keeping a human in the loop with Business AI, so governance remains essential: users should validate their data and review predictions before operational use.
4. Worth a mention: How does SAP-RPT-1 compare on accuracy, latency and cost to task‑specific narrow models and to prompting LLMs for the same tabular tasks?
Herzig: On SAP benchmarks and domain tests, RPT-1 often outperforms optimized narrow models – in some domains cutting errors roughly in half – and substantially beats state-of-the-art LLM prompting for tables. It is also far more efficient: using an LLM as a table solver can require orders of magnitude more compute and much higher latency, making LLMs impractical for high-volume, time-sensitive enterprise scenarios. In short, RPT-1 delivers higher prediction quality with lower latency and infrastructure cost, enabling real-time scoring and broader operational adoption.
5. More info: Where and how can customers run SAP-RPT-1 safely?
Herzig: Two versions of SAP-RPT-1 are now available on SAP's generative AI hub: sap-rpt-1-small is ideal for medium-complexity prediction scenarios where low latency and high prediction throughput are the priorities; sap-rpt-1-large is for complex prediction scenarios where best prediction quality and lowest error rates are the main goals.
In addition, there's an open‑source variant (sap-rpt-1-oss) for experimentation. Customers can test SAP-RPT-1 today with their own data or SAP-provided samples via the SAP-RPT-1 playground, an interactive testing environment accessible at rpt.cloud.sap.
6. Some examples: What concrete enterprise use cases and early results demonstrate measurable ROI (for example in finance, supply chain or sales), and what metrics should customers track?
Herzig:
- SAP-RPT-1 delivers measurable ROI in high-volume, error-prone operational workflows such as finance (auto-coding invoices), supply chain (predicting which shipments are at risk during disruptions) and sales (prioritizing campaign leads). As noted above, our benchmarking shows substantial error reductions, which directly lower exceptions, manual rework and correction costs while speeding decision cycles.
- To quantify value, we track model quality (error rate and improvement versus baseline), operational impact (exceptions avoided, time‑to‑insight, throughput) and business outcomes (cash‑flow improvement, reduced stockouts, conversion lift, cost per prediction and payback period).
7. Looking ahead: How will SAP-RPT-1 evolve, and what should enterprises do now to be ready to adopt tabular generative AI responsibly?
Herzig:
- Looking ahead, SAP-RPT-1 will advance on three fronts: stronger table‑native capabilities, tighter integration into SAP Business Technology Platform and application workflows, and richer machine learning operations.
- In 2026, businesses will see the massive value waiting to be unlocked from their tabular business data. To be ready, organizations should inventory and classify their critical tabular datasets, pick a well-scoped, high-value pilot with clean labels and baseline KPIs, and deploy validation, human-in-the-loop gates and continuous monitoring while aligning security and compliance. Those steps let you scale quickly and responsibly once pilots prove value.