We are all very familiar with the statement: dy = f'(x)dx - yes, that's "equal to".
However, there seems to be a very deep-rooted misconception about what this actually means and how that meaning follows from the actual definitions of these "differentials". And there is a good reason why "dy by dx" behaves like a fraction (i.e. diff(y) "divided by" diff(x)) even though, in reality, "dy by dx" is not a fraction.
In the "differential" definitions the variables dy and dx are NOT - yes you heard it right- NOT restricted to be infinitely small. dx can be as large as you like, and when you find the corresponding value of dy (also quite likely to be large) and then divide them ( dy "divided by" dx) - by the magic of similar triangles - you get the slope of the tangent. In fact, depending on f(x) and the value of dx, dy can actually be larger than "delta"y.
Now, having said that, the definitions of differentials do not prevent dx and dy from being infinitely small, but the key is that they are not required to be. Newton knew this, and it was key to his wonderful discovery.
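Here is a quick numerical illustration of the claim above that dy can be larger than delta-y when dx is large; the choice of f(x) = sqrt(x), x = 1 and dx = 3 is my own, picked only because the numbers come out cleanly:

```python
# Illustration (my own example): a concave f with a deliberately large dx.
import math

def f(x):
    return math.sqrt(x)

def fprime(x):
    return 0.5 / math.sqrt(x)

x, dx = 1.0, 3.0                # dx is not small at all

dy      = fprime(x) * dx        # differential: rise along the tangent line
delta_y = f(x + dx) - f(x)      # actual change in f over the same dx

print(dy)        # 1.5
print(delta_y)   # 1.0  -> here dy is larger than delta-y
print(dy / dx)   # 0.5, i.e. f'(1) exactly, no matter how big dx is
```

For a convex f (e.g. f(x) = x^2) the inequality goes the other way, which is why it depends on f(x) and the value of dx.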
As strange as this may sound to some, it is not fake news, but really is true.
BTW, Newton used the term "moment of x"; "differential of x" is Leibniz's term. But they are very similar: a "moment" is a "differential" that is infinitely small.
Cheers - Ian
Thanks for letting me know that dy and dx are not "always" small, but can be large too!