Featuring the NOVA Framework
Human Contribution, Organizational Value, and the New Science of Job Evaluation in the Age of AI and Adaptive Organizations
About the Book
Job evaluation has guided compensation and organizational design for nearly a century. Today, its foundational assumptions about stable roles, linear hierarchies, and technology as mere background have been dismantled by five converging forces that no incremental adjustment can address.
This book traces the discipline from its industrial origins to its present crisis, and proposes a working answer: the NOVA Framework, a 78-point, six-factor methodology built for the age of AI, fluid roles, and governance complexity.
"The frameworks we have relied upon for decades find themselves structurally misaligned with the reality they attempt to measure."
When artificial intelligence executes the tasks that once defined a role, what remains for the human to contribute, and how should that contribution be measured and rewarded?
CHROs reimagining total rewards architectures. Compensation professionals seeking defensible alternatives. Organizational designers navigating AI adoption. M&A practitioners harmonizing disparate grade structures.
NOVA explicitly measures the Human-AI Orchestration layer: the capacity to direct, interrogate, and override intelligent systems as a primary determinant of role value. No legacy framework captures this dimension.
Why Legacy Fails
Machines now execute the task-based activities that traditional evaluation was designed to measure, fundamentally redefining what human contribution means.
Static job descriptions are yielding to dynamic role portfolios and cross-functional mandates that no fixed document can adequately capture.
Geographic decoupling has dismantled supervision as a proxy for accountability. Value is now measured through outcomes and networked influence.
The half-life of professional expertise is compressing. Learning velocity and adaptive capacity now outrank accumulated knowledge as value drivers.
Roles now carry algorithmic accountability, data stewardship, and ESG responsibilities that legacy frameworks were never designed to evaluate.
The NOVA Framework
NOVA is a 78-point, six-factor job evaluation methodology developed through years of consulting practice across industries and geographies. It evaluates what roles produce and influence, not merely what they do, and explicitly recognizes the human-AI orchestration layer as a primary differentiator of role value.
"NOVA is not presented as a theoretical ideal but as a working methodology. It is designed to evaluate what roles produce and influence, not merely what they do."
Assesses the scope and consequence of the role's contribution to organizational outcomes, financial accountability, and strategic alignment.
Measures the nature and difficulty of thinking required, from routine execution to ambiguous, multi-variable challenges demanding original synthesis under uncertainty.
Evaluates the depth, breadth, and currency of knowledge required, including learning velocity and resistance to knowledge obsolescence.
The factor legacy frameworks omitted. Evaluates a role's capacity to leverage, direct, and govern AI and automated systems, the central differentiator in AI-augmented environments.
Captures the reach of a role's influence across formal and informal networks, beyond headcount to collaboration, alignment, and institutional persuasion.
Recognizes compliance, data stewardship, ESG responsibilities, and risk management as explicit, measurable dimensions of role worth in regulated and AI-enabled environments.
"Built for continuous calibration, because any evaluation system with a five-year shelf life is obsolete before its first review cycle."
Who This Is Written For
Reimagining total rewards architectures and seeking evaluation frameworks that match the organizational reality AI is creating.
Seeking defensible, evidence-based alternatives to aging point-factor systems that no longer reflect the work they are grading.
Wrestling with structural implications of AI adoption on grade architecture, role families, and career progression frameworks.
Recognizing that how an organization values work is inseparable from its ability to attract, develop, and retain talent.
Rapidly harmonizing disparate organizational structures into cohesive, equitable enterprises during and after integration.
Building the infrastructure to evaluate, reward, and develop human contribution where AI is an active collaborator.
A structured 45-minute e-learning program designed for HR professionals, compensation specialists, and transformation leaders who need to evaluate, implement, and advocate for modern job evaluation practice in AI-augmented organizations.
Understand the structural failures of legacy job evaluation frameworks
Apply the NOVA Framework's six-factor methodology to evaluate roles
Score and interpret the Human-AI Orchestration factor
Design an implementation roadmap for NOVA in your organization
Receive a KAN-certified digital credential shareable on LinkedIn, Instagram, and Facebook
Enroll below with your professional details. The program takes approximately 45 minutes and can be completed in a single session or across multiple visits.
Please use your professional email address. Your certificate will be issued in your name as entered below.
Your information is used solely to issue your certificate and will not be shared with third parties.
Job evaluation is the systematic process of determining the relative value of jobs within an organization. Its purpose is not to assess the performance of the people in those roles but to establish a principled, defensible basis for compensation, grade architecture, and organizational design.
How an organization values work directly shapes who it attracts, how it develops talent, and whether it retains the capability that strategy demands. A flawed evaluation system does not merely produce inequitable pay. It produces misaligned incentives, distorted career paths, and an organizational structure that rewards the wrong things.
Modern job evaluation emerged from the industrial era. Early methods, including job ranking, job classification, and the Hay Guide Chart, were designed for stable, task-defined roles in hierarchical organizations. The underlying assumption was simple: work is a collection of tasks, and tasks can be inventoried, compared, and scored.
For most of the twentieth century, this assumption held. Roles were relatively stable. Technology was a tool, not a collaborator. The knowledge required to do a job could be documented in a job description and compared across a consistent point-factor model.
Point-factor systems were an important advance. They introduced structure, comparability, and a degree of objectivity. But they were designed for a world that no longer exists. Their fundamental unit of analysis is the task, and that task has been partially or wholly automated in role after role. The question is no longer what the person does, but what only the person can do.
Legacy job evaluation frameworks have not simply aged poorly. They have been structurally undermined by five concurrent disruptions that no incremental revision can address. Understanding these forces is essential before any practitioner can make the case for a new methodology.
1. AI and Automation Convergence. Traditional point-factor systems measured task complexity, task variety, and the skill required to execute defined activities. Those activities are now increasingly performed by AI. When a machine executes the task, the task is no longer a valid proxy for human value. What the role requires is the judgment to direct, calibrate, and override the machine.
2. Fluid Role Boundaries. The job description, the foundational document of classical evaluation, assumes a stable, enumerable set of accountabilities. Modern roles are defined by portfolios of work that shift across quarters, projects, and business conditions. A job description captures what the role looked like when it was written, not what it demands today.
3. Distributed and Hybrid Work. Physical co-location and supervision were implicit proxies for accountability in legacy frameworks. The shift to distributed and hybrid work models has severed this link. Value must now be assessed through outcomes, influence, and organizational contribution, not presence.
4. Skills Over Tenure. Traditional systems rewarded accumulated experience and knowledge depth at a single point in time. In environments where professional knowledge has a shortening half-life, the ability to continuously acquire and apply new capabilities has become the more important variable.
5. Ethical and Governance Complexity. Roles today carry accountability for algorithmic outputs, data handling, regulatory compliance, and ESG commitments. These dimensions of role worth are structurally invisible to legacy frameworks because they did not exist when those frameworks were designed.
Continuing with legacy frameworks does not merely produce inaccurate grades. It actively misaligns compensation with contribution, sends talent the wrong signal about what the organization values, and creates internal equity problems that surface acutely during AI transformation programs and M&A integration.
NOVA, the Navigational Organizational Value Assessment, is a 78-point, six-factor job evaluation methodology developed through consulting practice across industries and geographies. It was designed from first principles to address the structural failures of legacy frameworks documented in Module 2.
NOVA evaluates what roles produce and influence, not merely what they do. Four of its six factors are evolved versions of classical evaluation dimensions; two are genuinely new, addressing dimensions of role value that legacy frameworks were never designed to capture.
Each NOVA factor is scored on a thirteen-point scale, with odd-numbered anchors providing defined behavioral descriptors. Scores are weighted by role family and aggregated into a composite with a maximum of 78 points. The NOVA JE Tool at kannova.ai administers and records all evaluations with an auditable factor-level justification log.
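The aggregation described above can be illustrated in a few lines of code. This is a minimal sketch, not the NOVA JE Tool's implementation: the factor identifiers and the role-family weights below are invented for illustration, and the only property taken from the text is that six factors scored 1 to 13 combine into a weighted composite capped at 78.

```python
# Illustrative sketch of NOVA-style score aggregation.
# Factor identifiers paraphrase the six factors described in the text;
# the role-family weights are hypothetical examples, not NOVA's actual values.

FACTORS = [
    "impact", "cognitive_complexity", "knowledge",
    "human_ai_orchestration", "influence", "governance",
]

# Hypothetical weights for one role family. Keeping the weights averaging
# to 1.0 preserves the 78-point maximum (6 factors x 13 points).
ROLE_FAMILY_WEIGHTS = {
    "engineering": {
        "impact": 1.0, "cognitive_complexity": 1.2, "knowledge": 1.0,
        "human_ai_orchestration": 1.3, "influence": 0.7, "governance": 0.8,
    },
}

def composite_score(scores: dict, family: str) -> float:
    """Weighted composite for one role; max 78 when weights average 1.0."""
    weights = ROLE_FAMILY_WEIGHTS[family]
    for factor in FACTORS:
        if not 1 <= scores[factor] <= 13:
            raise ValueError(f"{factor} must be on the 1-13 scale")
    return sum(weights[factor] * scores[factor] for factor in FACTORS)
```

Note that because weights differ by role family, two roles with identical raw factor scores can receive different composites, which is how the framework lets one factor, such as Human-AI Orchestration, carry a heavier premium in AI-intensive role families.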
Factor 4, Human-AI Orchestration, is the factor that distinguishes NOVA from every legacy framework. It evaluates not whether a role uses technology, but the degree to which the role must judge when to trust, when to interrogate, and when to override an intelligent system. As AI adoption deepens, this factor will increasingly become the primary differentiator of senior role value across industries.
Understanding the NOVA Framework is the first step. Applying it to organizational decisions, including grade architecture, talent development, total rewards, and M&A integration, is where practitioners create real value.
A NOVA evaluation produces a weighted composite score for every evaluated role. These scores, mapped across the organization, become the diagnostic instrument for grade architecture. A practitioner conducting a NOVA evaluation plots the score distribution, identifies natural clusters, and allows grade boundaries to emerge from the data rather than imposing a pre-determined number of levels.
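One simple way to let grade boundaries "emerge from the data," as described above, is to sort the composite scores and cut at the widest gaps between adjacent scores. This gap heuristic and the choice of grade count are illustrative assumptions, not part of the NOVA methodology; a practitioner might equally use one-dimensional clustering.

```python
# Sketch of data-driven grade boundaries: cut the sorted score
# distribution at its largest gaps. Heuristic chosen for illustration.

def grade_boundaries(scores: list[float], n_grades: int) -> list[float]:
    """Place n_grades - 1 boundaries at the widest gaps between sorted scores."""
    s = sorted(scores)
    gaps = [(s[i + 1] - s[i], i) for i in range(len(s) - 1)]
    # Keep the indices of the n_grades - 1 largest gaps, in score order.
    cut_idx = sorted(i for _, i in sorted(gaps, reverse=True)[: n_grades - 1])
    # Each boundary is the midpoint of its chosen gap.
    return [(s[i] + s[i + 1]) / 2 for i in cut_idx]

# Hypothetical composite scores showing three natural clusters.
scores = [22, 24, 25, 39, 41, 43, 60, 62, 65, 66]
print(grade_boundaries(scores, 3))  # → [32.0, 51.5]
```

The point of the sketch is the direction of inference: the number and placement of grades follow from where the evaluated population actually clusters, rather than from a grade structure fixed in advance.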
NOVA grades serve as the foundation for salary banding. The critical difference from legacy practice is that NOVA grades reflect contemporary contribution rather than historical task accumulation. A NOVA-anchored compensation structure rewards cognitive complexity, AI orchestration capability, and governance accountability, not title seniority or years in role.
The factor-level scores generated by a NOVA evaluation produce a contribution profile for every role. These profiles replace the grade label as the unit of career development. The question shifts from "What grade is this person?" to "Which factors define this role's contribution, and what development would move those factors?"
M&A practitioners face a specific version of the evaluation problem: two organizations with different frameworks, different grade nomenclatures, and different compensation philosophies must be harmonized rapidly. NOVA provides a common evaluation language that surfaces true structural equivalences, enabling defensible harmonization decisions that legacy cross-referencing cannot achieve.
A framework is only as effective as its implementation. NOVA's design incorporates a governance architecture that ensures evaluation decisions are transparent, auditable, and resistant to informal adjustment over time.
NOVA implementation follows four phases: diagnostic (mapping current grade architecture and identifying structural gaps), calibration (applying NOVA factors to a representative role sample to establish scoring anchors), evaluation (systematically evaluating the full role population through structured dialogue in the NOVA JE Tool), and integration (embedding NOVA grades into compensation banding, career frameworks, and talent processes).
Unlike legacy frameworks that produce a single definitive evaluation, NOVA is designed for continuous calibration. Factor weights can be adjusted annually as the organization's context evolves. An organization undergoing rapid AI adoption, for example, may increase the weight assigned to Factor 4 to reflect the growing premium on Human-AI Orchestration capability.
Every score in the NOVA JE Tool requires a written justification. The tool does not permit informal adjustment without an audit trail. Cross-functional calibration sessions are a required governance step before evaluation outcomes are finalized. This architecture preserves the integrity of NOVA evaluations across organizational cycles.
15 questions · Pass mark 70% · Select the best answer for each question
Join the Community
Proprietary research, case studies, and analytical frameworks from KaN's consulting practice, delivered to members before publication.
Connect with HR and compensation professionals applying NOVA across industries and contribute to its ongoing calibration and development.
Direct access to Neelima Kaushik and Kaushik Srinivasan through member webinars, structured Q&A, and advisory roundtables.
Community members receive priority access to new NOVA Portal features, sector calibrations, and benchmark data as they become available.
Moderated peer discussions on grade architecture, AI readiness assessments, M&A harmonization, and practical NOVA implementation challenges.
Reserved for practitioners in HR, compensation, organizational design, and related advisory disciplines. No cost to join.
You will receive a confirmation shortly. We look forward to having you as part of the Worth of Work practitioner network.
Table of Contents
Get the Book
Available in print and digital formats. Published by KAN Collective, India, 2026.
ISBN 978-93-5779-730-6
Shipping Worldwide
Join Us