Discussion about this post

Hwei Yi Lee

What worries me even more than whether the human is operationally "in-the-loop" is that the loop of execution might not even be continuous. For example, once the marketing campaign is designed, approved, and set live, let's assume it is successful. It needs to run for a while (several weeks, a month, maybe longer) to become memorable to consumers and bring in value, before trends or seasonality shift and a reset is needed.

That means humans add value in discontinuous bursts, and it's helpful to leverage the functional and institutional expertise of the same person to execute multiple refreshes and iterations of a product, a marketing story, a line of business, etc., over a long period, except they may not be busy for the same number of hours every single day or week. How do we adjust human rewards to be fair to their continuing revenue contributions, while also being respectful of companies' marginal costs? I feel that a consultancy or contract model may emerge (and is emerging) to take over from full-time employment in a number of knowledge fields where expertise is needed but the need may become discontinuous, especially with AI. Hopefully, this will evolve in a way that is fair to workers.

Victor KP

The chain framing is the right one, but the real bottleneck isn’t reliability in the Ford sense, it’s judgment depth at the design stage. What I see while implementing AI in HR, for example, is that chaining tasks amplifies whatever domain understanding exists at the point of design, including the gaps. The junior who never built a workforce report from scratch can’t judge whether the chained output makes sense. We’re not at the Singer fitter stage… we’re at the stage where we’re deciding who gets to be the fitter, and most organizations are getting that wrong.

