Why are "32 bits" important?
In the "bad old days" of 8 bit processors, handling numbers larger than 255 required special registers, special addressing modes, and often sequencing several instructions to accomplish anything.
255 isn't a very big number, I would note; address registers were therefore made 16 bits wide so as to refer to a usable memory space, providing 65536 "pieces" of memory, each being an 8 bit byte.
The transition to 16 bits didn't substantially change things. Most registers got extended to 16 bits, but since you already had 16 bit address registers, addressing didn't really change.
And you were restricted to 65536 as the biggest value manipulable directly; anything bigger required multi-word arithmetic that made programs more complex and much slower.
And 65536, while much bigger than 255, is still not a tremendously large dynamic range.
If you extend registers and memory addressing to 32 bits, the dynamic range extends to 0..4 billion. (Or ±2 billion, signed.)
And 4 billion is a large enough number to be useful for a whole lot of purposes. You can fit rather large data structures into that quantity of memory. You can represent anything less than national-debt-sized values in that range.
And since you can manipulate that dynamic range directly, you can write short programs that manipulate fairly large values that run efficiently.
Moving from 32 to 64 bits gains you the ability to directly (i.e., via single, fast assembly language instructions) process Bill Gates' net worth or national debts expressed in pesos or lira, which, amusements aside, really isn't that much of an improvement. The move from 16 bits to 32 was a big deal; 32 to 64 isn't.
And then there's the memory management side of things. With 32 bits to play with, it starts being worthwhile to add the transistors to handle Translation Lookaside Buffers and other sorts of memory virtualization mechanisms. It wasn't of value at 8 or 16 bits: with CPUs that small and simple, an MMU would probably be more complex than the CPU itself, and the code to make use of the MMU might eat up most of the 64K that such a CPU has to work with.
But with 32 bits to play with, it's worth the transistors and the RAM to add the robustness and flexibility. This is really the substantial improvement that comes out of the 32 bit transition.
Moving to larger sized words allows building machines that can address more memory and work with larger chunks of data at once; I would contend that while it makes some algorithms more efficient and surely can improve performance, the leap to 64 bits is not of as much importance to computing as the leap from 16 to 32 bits.
The above discussion directly deals with the progressions that have taken place associated with microprocessors.
The developments moving to 32 bits parallel what was already the case in large scale systems of yesteryear that tended to use 36 bits. (And the folks that hearken back to the 36 bit days are the ones that tend to be condescending to the Unix users, considering Unix systems not to be "real computers...")
They had large memory spaces, often segmented, though that segmentation bears only minimal resemblance to the "64K segment" approach Intel and Microsoft promoted on the x86 architecture.
Memory protection was introduced on 36 bit systems; only when microprocessors grew up to 32 bits did it become practical for them to support it.
The additional bits tended to get used to indicate things; on LISP-based systems, some bits could be used to carry data useful to assist in garbage collection, for instance. Multics did additional things that I don't really have enough background to fully grasp.
I think that it can probably be argued that 36 bit systems provided functionality that 32 and 64 bit microprocessors do not, even to this day, provide.