Job Function Task Mapping

Which Tasks, Which Models, What Cost

98 daily enterprise tasks mapped to 4 AI model categories with per-task cost calculations across 2024-2030. The granular layer beneath the disruption map.

14
Job Functions
98
Tasks Mapped
$0.35
Daily Cost/Worker (2025)
$0.025
Daily Cost/Worker (2030)
Part VI — The Enterprise Disruption
Chapter 21: Task Mapping

Moving from industry-level disruption to the granular reality of enterprise work: 98 tasks mapped across 14 job functions, each matched to the right model tier and cost profile. This is where strategy meets execution.

Chapters 19 and 20 mapped disruption at the industry and SaaS-category level — which sectors face transformation, when each category falls, and the model types driving displacement. But enterprises do not adopt AI at the industry level. They adopt it task by task, workflow by workflow, role by role. A Financial Analyst performing expense categorization 60 times per day has fundamentally different AI requirements from the same analyst drafting a cash flow projection narrative twice a day. The first task is a classification problem ideally served by a $0.04-per-million-token fine-tuned model. The second requires a $2.50-per-million-token reasoning model. Routing both to the same frontier model wastes money on the first and adds no value to the second — and across the two tasks combined, the enterprise pays roughly 5x more than it needs to.
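The arithmetic behind that gap can be sketched directly. In the snippet below, the per-million-token prices come from the text, but the token counts per call (500 for a categorization, 3,000 for a narrative draft) are illustrative assumptions:

```python
# Sketch: daily AI cost for the two analyst tasks under two routing policies.
# Token counts (500 and 3,000 per call) are illustrative assumptions.

def cost_per_invocation(tokens: int, price_per_million: float) -> float:
    """Cost of one model call at a given per-million-token price."""
    return tokens / 1_000_000 * price_per_million

# Expense categorization: 60 calls/day, fine-tuned model at $0.04/M tokens
categorize = cost_per_invocation(500, 0.04)
# Cash-flow narrative: 2 calls/day, reasoning model at $2.50/M tokens
narrative = cost_per_invocation(3_000, 2.50)

optimal_daily = 60 * categorize + 2 * narrative
# All-frontier: the categorization calls also go to the $2.50/M model
all_frontier_daily = 60 * cost_per_invocation(500, 2.50) + 2 * narrative

print(f"optimal: ${optimal_daily:.4f}/day, all-frontier: ${all_frontier_daily:.4f}/day")
print(f"overspend: {all_frontier_daily / optimal_daily:.1f}x")  # about 5.6x here
```

Under these assumed token counts the all-frontier bill is roughly 5.6x the optimally routed one, in the neighborhood of the 5x figure above; the exact multiple depends on real token volumes.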

This chapter presents the task-level analysis that underpins the disruption maps. Across 14 enterprise roles — from Support Agent to Corporate Counsel, from Software Developer to Supply Chain Analyst — we map 98 specific daily tasks to four model categories (Large Commercial, Small Commercial, Open Source, and Fine-Tuned), calculating the cost per invocation and the cost per day for each. The central finding: 57% of enterprise tasks are optimally served by Small Commercial or Fine-Tuned models. Only 32% genuinely require Large Commercial reasoning. Enterprises routing all tasks to GPT-4o or Claude Sonnet spend 3.2x more than necessary — a finding with direct implications for AI budgeting, vendor selection, and the routing infrastructure that Chapter 22 will address.

The cost trajectory makes this even more urgent. By 2028, 68% of all enterprise AI tasks will cost less than $0.001 per invocation — effectively zero marginal cost. The strategic question is no longer "can we afford AI?" but "can we afford not to route intelligently?"

57%
Tasks optimal for Small or Fine-Tuned models
$0.008
Average cost per task invocation (2025)
3.2x
Overspend with all-Large routing
68%
Tasks <$0.001 by 2028

1. Task Complexity Distribution by Department

Percentage of simple, medium, hard, and frontier tasks in each department

Customer Service has the highest share of simple tasks (57%), while Legal has the highest share of hard+frontier tasks (57%). This directly maps to model routing strategy.

The Complexity Pyramid in Practice

The task complexity distribution across 98 enterprise tasks validates the 80/15/5 pyramid from the Small Models thesis. 33.7% of tasks are classified as simple (classification, extraction, templated generation), 41.8% as medium (structured analysis, summarization, moderate generation), 23.5% as hard (multi-step reasoning, nuanced judgment), and just 1% as frontier (cutting-edge reasoning such as litigation risk assessment). The combined simple-plus-medium share of 75.5% closely matches the predicted 80% — with the slight overweight in hard tasks reflecting this analysis's focus on knowledge workers.

The practical consequence is immediate: Customer Service has the highest simple-task share (57%), meaning the vast majority of support interactions can be handled by fine-tuned models at $0.000028 per invocation. Legal sits at the opposite extreme, with 57% hard-plus-frontier tasks that demand reasoning-class models costing 2,000x more per invocation. The model routing strategy for these two departments should be fundamentally different — yet most enterprises treat them identically.
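The stated percentages fall out of simple bucket counts over the 98 tasks. A minimal check, assuming bucket counts of 33/41/23/1, which are inferred from the rounded shares rather than given explicitly in the text:

```python
# Complexity-bucket shares over the 98 mapped tasks. The raw counts
# (33/41/23/1) are inferred from the rounded percentages: an assumption.
buckets = {"simple": 33, "medium": 41, "hard": 23, "frontier": 1}
total = sum(buckets.values())                    # 98 tasks
shares = {k: round(100 * v / total, 1) for k, v in buckets.items()}
print(shares)                                    # 33.7 / 41.8 / 23.5 / 1.0
print(round(shares["simple"] + shares["medium"], 1))  # 75.5: the simple+medium share
```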

2. Daily AI Cost: Optimal Routing vs All-Large

Cost difference between intelligent model routing and using Large Commercial for everything

Support Agent shows the most dramatic savings: 24x overspend eliminated by routing simple tasks to Fine-Tuned models. Operations Manager at 20.5x is second.

3. Optimal Model Distribution by Role

How many tasks in each role are best served by each model category

The Overspend Problem: Why Model Routing Is a Budget Issue

The overspend ratios tell the core story of this analysis. A Support Agent routed entirely through Large Commercial models costs $0.84 per day in AI spend; optimal routing drops that to $0.035 — a 24x reduction. An Operations Manager drops from $0.41 to $0.02 (20.5x). Even a Software Developer, whose tasks genuinely require more reasoning capability, drops from $2.28 to $1.20 (1.9x). Summed across one worker in each of the 14 roles, the difference between all-Large routing and optimal 4-tier routing is $15.94 per day versus $4.96 — a 69% saving. Scaled to a 330-person enterprise over 250 working days, that gap compounds to tens of thousands of dollars annually.

The pattern is consistent: the highest overspend occurs in roles dominated by high-volume simple tasks, where fine-tuned models costing $0.04 per million tokens can match or exceed the accuracy of frontier models that cost $2.50 per million tokens. A fine-tuned 7B model trained on a company's historical ticket data achieves 96% accuracy on ticket routing — six points higher than GPT-4o — at 1/60th the cost.
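The per-role figures in this section reduce to two formulas: overspend ratio = all-Large daily cost / optimal daily cost, and annual savings = the daily difference times working days. A small sketch using the numbers quoted above and the 250 working days per year assumed in chart 6:

```python
# Overspend ratio and annualized per-worker savings for three roles,
# using the daily costs quoted in the text and 250 working days/year.
WORKING_DAYS = 250
roles = {  # role: (all-Large $/day, optimally routed $/day)
    "Support Agent":      (0.84, 0.035),
    "Operations Manager": (0.41, 0.02),
    "Software Developer": (2.28, 1.20),
}
for name, (all_large, optimal) in roles.items():
    overspend = all_large / optimal
    annual_savings = (all_large - optimal) * WORKING_DAYS
    print(f"{name}: {overspend:.1f}x overspend, ${annual_savings:,.2f}/year saved per worker")
```

Note the inverse relationship: the cheaper a role's optimal routing, the larger the ratio, even though the absolute annual savings per worker all land in the low hundreds of dollars.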

4. Cost Per Task Deep Dive

Select a role to see per-task costs across all 4 model categories (2025 prices)

5. Cost Evolution 2024-2030

How per-invocation costs decline across representative tasks (log scale)

By 2030, even the most expensive task (contract review) drops below $0.005/invocation. Simple tasks approach $0.000002 — effectively free.

The Cost Trajectory: From Dollars to Fractions of Cents

The cost evolution from 2024 to 2030 follows the Intelligence Yield curve with remarkable precision. A simple FAQ answer costs $0.000048 today; by 2030 it will cost $0.0000016 — thirty times cheaper. Contract review, the most expensive common enterprise task at $0.088 per invocation today, drops to $0.004 by 2030. The average cost per task invocation falls from $0.008 in 2025 to $0.0006 in 2030 — a 93% reduction.

The milestone dates matter for planning. By 2026, 55% of tasks fall below $0.001 per invocation and fine-tuned models reach $0.00001 for simple tasks. By 2028, 68% of tasks cross that threshold, and even contract review drops to $0.014. By 2030, the average daily AI cost per worker reaches $0.025 — less than the cost of a single piece of paper. AI becomes a rounding error in per-employee budgets, and the question of "can we afford AI" becomes purely about routing intelligence, not purchasing it.
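Those endpoints imply a steady annual price decline. A back-of-the-envelope sketch, assuming "today" means 2025 and a constant exponential decline over the five years to 2030:

```python
# Implied constant annual price-decline rate between two cost points.
def annual_decline(start: float, end: float, years: int) -> float:
    """Yearly fractional decline that takes `start` to `end` over `years`."""
    return 1 - (end / start) ** (1 / years)

# FAQ answer: $0.000048 (2025) -> $0.0000016 (2030)
faq = annual_decline(0.000048, 0.0000016, 5)
# Contract review: $0.088 (2025) -> $0.004 (2030)
contract = annual_decline(0.088, 0.004, 5)
print(f"FAQ answer: {faq:.0%}/year, contract review: {contract:.0%}/year")
# prints roughly 49%/year and 46%/year
```

A decline of just under 50% per year is what a per-invocation price needs in order to fall roughly 30x over five years; the steepness is smooth and compounding, not a one-time drop.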

6. Routing Savings by Role

Annual savings from intelligent model routing vs all-Large strategy (per worker, 250 days/year)

7. Task Volume vs Cost

Each bubble is a task. X = daily invocations, Y = cost per invocation (optimal model). Size = total daily cost. Color = complexity.

The highest-volume tasks cluster in the bottom-right (cheap + frequent). The expensive outliers in the top-left are hard/frontier tasks with low daily volume.

The Volume-Cost Inverse: A Law of Enterprise AI

The scatter plot of task volume versus cost per invocation reveals a near-universal law of enterprise AI economics: the highest-volume tasks are the cheapest to automate, and the most expensive tasks are the rarest. Nine of the top ten highest-volume tasks cost under $0.0002 per invocation. Ticket classification (100 per day), FAQ answering (80 per day), customer sentiment analysis (60 per day), expense categorization (60 per day) — these are the routine cognitive backbone of enterprise work, and they are trivially cheap to automate with fine-tuned models.

The single exception is code generation, at 40 invocations per day and $0.015 per invocation — the highest-cost high-volume task in the enterprise. This anomaly explains why programming consumes more than 50% of all AI tokens globally and why Cursor reached $1.2 billion ARR faster than any SaaS product in history. Code generation sits at the intersection of high volume and genuine complexity, demanding reasoning-class models that cannot yet be replaced by smaller alternatives without meaningful quality degradation.
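The bubble sizes in the scatter are simply volume times per-invocation cost. A two-task comparison, reusing the $0.000028 classification price from the support example earlier in the chapter and the code-generation figures above:

```python
# Total daily cost per task: invocations/day times $/invocation.
# The classification price reuses the fine-tuned support figure cited earlier.
tasks = {  # task: (invocations per day, $ per invocation)
    "ticket classification": (100, 0.000028),
    "code generation":       (40, 0.015),
}
daily = {name: vol * cost for name, (vol, cost) in tasks.items()}
for name, cost in daily.items():
    print(f"{name}: ${cost:.4f}/day")
```

Code generation's bubble comes out more than 200x larger than the highest-volume classification task — exactly the anomaly described above.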

8. Department Deep Dive

Select a department to see its model mix, top costs, and task breakdown

9. Complete Task Reference

All 98 tasks with model mapping, costs, and suitability scores. Click column headers to sort.

What Comes Next

The task-level mapping in this chapter reveals the granular economics of enterprise AI: which tasks, which models, what cost. The central finding — that 57% of tasks are optimally served by small or fine-tuned models, and that enterprises overspend 3.2x by routing everything to frontier models — points directly to the strategic imperative of Part VI. In Chapter 22, we turn to the Intelligence Routing Revolution: the infrastructure, algorithms, and organizational designs that enable enterprises to match the right model to the right task at the right cost, dynamically and at scale. Routing is not a technical feature; it is the mechanism that converts the theoretical savings identified in this chapter into realized enterprise value.
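The routing mechanism Chapter 22 develops can be previewed in miniature. A deliberately simplified sketch: the tier table, the $0.30 small-commercial price, and the assumption that each task carries a complexity label are all illustrative, not the book's actual implementation:

```python
# Toy complexity-based router. Tier names, the $0.30 small-commercial price,
# and the complexity labels are illustrative assumptions.
TIERS = {
    "simple":   ("fine-tuned",       0.04),  # $/M tokens
    "medium":   ("small-commercial", 0.30),
    "hard":     ("large-commercial", 2.50),
    "frontier": ("large-commercial", 2.50),
}

def route(complexity: str) -> tuple[str, float]:
    """Return (model tier, $/M-token price) for a task's complexity label."""
    return TIERS[complexity]

tier, price = route("simple")
print(tier, price)   # prints: fine-tuned 0.04
```

In production the complexity label would come from a classifier or a per-task registry rather than a hand-written table; the point is only that the routing decision itself is cheap relative to the savings it unlocks.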