

Tbh, that won’t be useful, like the guy above stated.
Google searches are very similar in terms of the work that needs to be done, so you'd expect the average and the median to be very close. For example, take these numbers: 1, 1, 2, 2, 3. The median is 2 and the average is 1.8, so they're nearly the same.
AI requests vary wildly. GPT-5, for example, routes queries to multiple internal models, ranging from very small text-only models to huge reasoning models and image generation models. There's no way to know how much energy they use without OpenAI publishing data, but you can compare how long the computation takes.
For a fast, simple text-only answer, ChatGPT using GPT-5 takes a second or so to start writing and maybe 5 seconds to finish. Generating an image might take a minute or two. And if you dump some code in there and tell it to make large adaptations, it can take 10+ minutes. That's a factor of more than 100x. If most answers are handled by the small text-only models, the median will be in the 5 second range while the average might be closer to 100 seconds, so median and average diverge a lot.
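You can see the median/average split in a few lines of Python. The response times below are made-up illustration numbers, not measurements:

```python
import statistics

# Uniform workload (the "Google search" case): values cluster together,
# so the median and the mean nearly coincide.
searches = [1, 1, 2, 2, 3]
print(statistics.median(searches), statistics.mean(searches))  # 2 1.8

# Skewed workload (the "AI request" case, hypothetical seconds):
# 90 quick text answers plus a handful of long image/code jobs.
times = [5] * 90 + [120] * 5 + [600] * 5
print(statistics.median(times))  # 5  (a typical request is fast)
print(statistics.mean(times))    # 40.5  (the slow tail drags the mean up)
```

A few long-running requests barely move the median but pull the average up by almost an order of magnitude, which is exactly why the two numbers tell different stories here.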