Anthony Repetto
1 min read · Aug 11, 2021
Oh, I wasn't imagining anything fancy - just the usual basis vectors of the space, and rotations around them. Rotating around each basis of the latent space converts an initial vector into a set of others, and each rotation is easy to revert, because you follow the basis. So each quadrant of the latent space can encode a different, overlapping *instance* of an object. (Initial vectors would be confined to the all-positive-valued quadrant, with each other quadrant holding an entire copy of the set of objects.) A toy example: no rotation => the object is a *current* observation; x-axis rotation => the object is a *goal* condition; y-axis rotation => the object is a *constraint* ... My only hope was for a process similar to what the neuroscientists found in the article I linked. Thank you for noticing my stray thought! I expect you and others could find a better, actual implementation of Libby & Buschman's findings, if they might help. :)
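The quadrant-tagging idea above can be sketched in a few lines. This is only a minimal illustration, not anything from the linked paper: it assumes a 3-D latent space, and uses 180-degree rotations about the basis axes (which negate the other two coordinates, moving a positive-orthant vector into a distinct orthant). The role names "observation", "goal", and "constraint" are the toy labels from the example above.

```python
import numpy as np

# Toy latent space: initial vectors live in the all-positive orthant.
# A 180-degree rotation about a basis axis negates the other coordinates,
# moving the vector into a different orthant - a separate "copy" of the
# object set.  Each such rotation is its own inverse, so the tag is
# trivially reverted by following the same basis.
ROLES = {
    "observation": np.diag([1.0, 1.0, 1.0]),    # no rotation
    "goal":        np.diag([1.0, -1.0, -1.0]),  # 180 deg about x-axis
    "constraint":  np.diag([-1.0, 1.0, -1.0]),  # 180 deg about y-axis
}

def tag(vec, role):
    """Rotate an all-positive latent vector into the orthant for `role`."""
    return ROLES[role] @ vec

def untag(vec, role):
    """Each 180-degree rotation is an involution: apply it again to revert."""
    return ROLES[role] @ vec

v = np.array([0.3, 0.7, 0.5])            # an object, positive orthant
g = tag(v, "goal")                        # lands in the (+, -, -) orthant
assert np.allclose(untag(g, "goal"), v)   # easy to revert along the basis
```

Because each matrix is orthogonal with determinant +1, these really are rotations, and because each is an involution, encoding and decoding a role use the same operation.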