It's a pretty important caveat that the RCTs reviewed used scientific, rather than strictly clinical, definitions of ADHD. Having worked directly with and published on these types of datasets for multiple years, I can tell you with absolute certainty that those are very much not the same thing. Scientific definitions tend to be looser because many studies don't have the resources to do a state-of-the-art ADHD evaluation, which can be tricky, time-consuming, etc. It's similar to how measures in scientific articles can carry a greater degree of measurement error (i.e., lower reliability coefficients) than those used in clinical practice (ideally), or how RCTs for depression lean heavily on the HAM-D. So these results are unsurprising to me.
There was an article in the NYT a while back about differing definitions of ADHD in science vs. clinical practice that was blasted by prominent ADHD researchers (including Barkley, who put out hours' worth of video taking the article down) for being overly sensational about the validity concerns with the diagnosis. I personally didn't find the article that horrible, because there are a lot of unanswered questions about the diagnosis. For instance, it's highly correlated with many developmental problems, highly comorbid with many psychiatric problems, highly heterogeneous, can fluctuate drastically over time, and something like three of its symptoms are common in the general population. I do believe that ADHD exists independent of our current culture, but like many phenomena in psychological science, it's very poorly measured.