Hello! I apologize for my delayed response — busted computer for the majority of quarantine!

I am only a mathematician, thinking in high-dimensional manifolds; I am so error-prone as a coder that I let specialization and trade do the work. If you are a programmer, and would like to implement this, please do! It would be a shame if the only things that became software were those invented by programmers.

The function itself is simply two straight lines which meet at some midpoint. The *height* of that midpoint is set by the *bias* on that neuron, B. The specific input value which triggers that midpoint’s level of activation can be called X; so, the coordinate of the midpoint is (X, B). Then, there is the line segment for input values which are *less than X*: f(a)=ma+b, as well as the line segment for values *greater than X*: g(a)=na+c. Both these lines must pass through the point (X, B). That is, given the slopes m and n, we can solve for the intercepts using B=mX+b and B=nX+c, which gives b=B-mX and c=B-nX.
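Since requiring both lines to pass through the midpoint fixes the intercepts, the whole function is determined by just the four numbers X, B, m, and n. A minimal Python sketch of this (the name `reflu` is my own placeholder, not an established function):

```python
def reflu(a, X, B, m, n):
    """Two straight lines meeting at the midpoint (X, B).

    Requiring both lines to pass through (X, B) fixes the intercepts:
      B = m*X + b  gives  b = B - m*X   (branch for a < X)
      B = n*X + c  gives  c = B - n*X   (branch for a >= X)
    """
    if a < X:
        return m * a + (B - m * X)  # f(a) = m*a + b
    return n * a + (B - n * X)      # g(a) = n*a + c
```

Both branches evaluate to exactly B at a = X, so the function stays continuous no matter how the slopes move during training.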

Yet, what we want are lines which depend ONLY upon their slopes, and which are guaranteed to meet at the midpoint (X, B). That is, we apply the learning rate to m and n in our equations, and solve for b and c, respectively. So, you are applying a gradient step to **each** component of the RefLu: the *bias B*, the *midpoint’s X*, and *both slopes m and n*. Does that help?
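Writing the active branch as y = slope*(a - X) + B makes the partial derivatives with respect to all four components easy to read off by hand. A hedged sketch of one gradient step (the function names, the dictionary layout, and the `upstream` gradient convention here are my own illustration, not part of the original note):

```python
def reflu_grads(a, X, B, m, n):
    """Partial derivatives of the RefLu output w.r.t. each learnable component.

    Substituting the solved intercepts rewrites each branch as:
      a <  X:  y = m*a + (B - m*X) = m*(a - X) + B
      a >= X:  y = n*a + (B - n*X) = n*(a - X) + B
    """
    below = a < X
    slope = m if below else n
    return {
        "m": (a - X) if below else 0.0,  # dy/dm: only the left branch uses m
        "n": 0.0 if below else (a - X),  # dy/dn: only the right branch uses n
        "X": -slope,                     # dy/dX: shifting the midpoint slides the active line
        "B": 1.0,                        # dy/dB: the bias shifts the output directly
    }

def sgd_step(params, a, upstream, lr=0.01):
    """One gradient step on all four components, given the upstream
    gradient dLoss/dy flowing back into this neuron."""
    g = reflu_grads(a, **params)
    return {k: params[k] - lr * upstream * g[k] for k in params}
```

Because the intercepts are recomputed from (X, B, m, n) on every forward pass, the two lines cannot drift apart: any step on the slopes automatically re-solves b and c so the lines still meet at (X, B).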