Prototyping a Human Dividend Licence for AI Automation

From JOHNWICK

Restrictive AI licences determine who will capture the gains from automation, and that single fact is about to shape the future far more than most people realise.

As AI automation accelerates, the terms under which it is licensed will decide whether AI’s productivity gains concentrate or circulate. And while the open-source path is a vital counterbalance to restrictive licensing, it will not be sufficient on its own. Open code is always weaker than the revenue-generating economies built on top of it, because that economic machinery turns code into value, and can exclude people from that value just as easily.

We therefore need a new generation of licensing models designed to shape socio-economic outcomes at scale. Licensing remains one of the most powerful yet least explored levers for governing artificial intelligence. There is a clear need to prototype and evaluate impact-oriented models that embed social, environmental, and ethical considerations directly into the economic and operational logic of AI deployment.

Though this may not be immediately obvious, a licensing model does far more than define terms of access: it reshapes the flow of value, responsibility, and power across society. Each licence type, by setting the conditions under which technology may be used, shared, or monetised, generates its own pattern of social, environmental, and economic effects. As technologies spread under a given licence, those effects accumulate and scale. Open-source licences transformed global innovation dynamics; permissive data-sharing licences shifted privacy norms; copyfair licences created reciprocity-based sharing ecosystems; restrictive AI licences shape who benefits from automation. In this sense, every licence functions as a compound impact instrument, quietly reshaping society.

A well-designed licence can influence socio-economic outcomes as much as — and independently from — the technical utility of the systems it governs. By defining how AI circulates, generates value, and interacts with public goods, licensing can steer development towards inclusion, sustainability, collective benefit, or fairer adaptation to automation. Licensing sits upstream of markets and policy: change the licence, and you alter the architecture within which economic relations unfold.

My own focus is on prototyping a value-redistribution licence. Under it, the software owner would voluntarily commit to redirecting a small percentage of revenue from the sale of AI automation tools into a cash-back-to-all mechanism that returns a portion of software-generated income to every human being. For example, 5% of revenues, not profits. To begin with, this could be directed to a selected village below the poverty line, since it will take time before other AI developers follow suit and before coordination pathways emerge that can handle redistribution to all people on the planet. That is the design.
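The arithmetic of the mechanism is deliberately simple. A minimal sketch in Python, with entirely hypothetical revenue and recipient figures (only the 5%-of-revenues rate comes from the design above):

```python
DIVIDEND_RATE = 0.05  # share of revenues (not profits) committed under the licence


def human_dividend(revenue: float, recipients: int, rate: float = DIVIDEND_RATE):
    """Return (total dividend pool, per-person payout) for a revenue figure."""
    pool = revenue * rate
    return pool, pool / recipients


# Hypothetical example: a start-up with $200,000 in annual revenue,
# directing its dividend to a pilot village of 500 people.
pool, per_person = human_dividend(200_000, 500)
print(f"pool = ${pool:,.2f}, per person = ${per_person:,.2f}")
```

Because the commitment is keyed to revenues rather than profits, the pool is computable from a single top-line number, which is what makes it easy to add as one column in an early-stage projection.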

AI development is expensive and requires investment. Investment requires return, so AI licences will rarely be free; revenue must exist. Yet today, these revenues still live mostly in spreadsheets, continuously reshaped to persuade various kinds of potential investors. At this early stage, adding a column that commits 5% of projected revenue need not create pain. Sharing a prospective pie is far easier than carving contributions out of already established income streams. The timing is therefore favourable.

A simple example shows how adoption could begin. A small start-up releasing an AI automation tool for customer support could license version 1.0 under a Human Dividend Licence. The product is early, revenues are hypothetical, and the field is competitive. Some investors may question the move, but others will recognise the advantage: the company becomes the ethical default in a market where differentiation is thin. Customers gain both a service and a contribution to a global dividend; the start-up gains clarity of identity, narrative, and purpose; and the licence becomes part of the product’s architecture from day one. Why would anyone choose to offer their AI under such a licence? Because redistribution in the form of a universal dividend makes these systems the preferred option for customers. Human customers, that is. And in 2026, it will likely still be humans who make the customer choices.

Consumers would gain the means to favour AI services that voluntarily contribute to a global dividend. And we would gain a practical way to integrate increasingly powerful forms of artificial intelligence — from AGI to ASI — into society’s fabric in a cooperative and responsible manner. In doing so, we would establish a mutualistic relationship in which artificial intelligence generates value and human beings receive a material share of that value. That is a direction worth designing for.

Increasing automation can be a win-win, if its benefits are shared.

Read the full article here: https://emlenartowicz.medium.com/prototyping-a-human-dividend-licence-for-ai-automation-1b4ff1fa9860