Er, yes - backprop is only one way. I mentioned decision trees in the article, and since then Bengio has shown us Equilibrium Prop. You could TL;DR my article as: "Backprop, *like any method that relies on these same constraints*, will fail to generalize and to suppress or integrate competing signals appropriately. We will need to *design a new method* that can compensate for these particular limitations."
AND - lo and behold, only *one month* after I wrote this article (waaay back in 2017), Geoff Hinton published his "Capsule Networks" - which I IMMEDIATELY wrote about in follow-up articles on this site, Medium.com... In the three years since, I have detailed numerous potential methods for achieving these goals AND others, by giving up on back propagation, or various other sacred cows.
With A.I., it usually helps to check when the article was written - you noticed one of my earliest articles on the topic, from BEFORE most people were looking at alternatives to backprop... I even pointed out, in my article "an Alternative to ReLu" (from two years ago), certain advantages of new activation functions that are shared by the best current alternative activation function for image processing: SIRENs.
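For anyone who hasn't met SIRENs: the core idea is just swapping ReLU for a scaled sine activation. A minimal sketch (illustrative only - not code from any of my articles; the `w0 = 30` scaling is the first-layer value suggested in the SIREN paper by Sitzmann et al.):

```python
import numpy as np

def siren_layer(x, W, b, w0=30.0):
    """One SIREN-style layer: sine applied to a scaled affine map."""
    return np.sin(w0 * (x @ W + b))

def relu_layer(x, W, b):
    """Standard ReLU layer, for comparison."""
    return np.maximum(0.0, x @ W + b)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 2))   # 4 samples, 2 features
W = rng.standard_normal((2, 3))   # 2 -> 3 units
b = np.zeros(3)

# Sine outputs stay bounded in [-1, 1] and keep sign information
# that ReLU throws away by zeroing all negative pre-activations.
out = siren_layer(x, W, b)
assert out.shape == (4, 3)
assert np.all(np.abs(out) <= 1.0)
```

The point of the comparison: a sine activation is smooth, periodic, and has well-behaved derivatives everywhere, which is part of why it works so well for image-fitting tasks where ReLU networks struggle.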
So... yeah, "the problem exists no matter you use back propagation or not." That's true for *all the simple, PRE-existing alternatives* - which is specifically why I've described what those problems are: to *DESIGN an alternative which DOESN'T have that problem*. Does that still seem off-topic?