Yeah, but that instability was not entirely Win95's fault.
Back then computers had almost no resources. NT had a "proper", academically correct OS design with a microkernel-style architecture (until NT4 moved graphics into the kernel for performance), and it paid dearly for it: resource consumption was nearly double that of Chicago (Windows 95). Additionally, app and hardware compatibility was crap: many, many apps, devices and especially games would not run on Windows NT. Microsoft spent the next 6-7 years trying to make NT acceptable to the consumer market and only achieved it starting with Windows XP.
So Win95 was hobbled by the need for DOS and Win 3.1 compatibility, but that same compatibility is exactly why it was such a huge commercial success.
Making things worse, the tools for writing reliable software were crap back then. Most software was written in C or C++, often without even the STL. Static analysis was piss-poor to non-existent. If you wanted garbage collection, Visual Basic was all you had (and what it actually used was reference counting). Unit testing existed as a concept but was barely known: it was extremely common for programs to have no unit tests at all, and testing frameworks like JUnit didn't exist yet. Drivers were routinely written by hardware engineers who had only a basic grasp of software engineering, so they were frequently very buggy. The hardware itself was often quite unreliable; computers didn't have the kinds of reliability technologies they have today.
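
For anyone who never ran into the distinction: reference counting frees an object the moment its count hits zero, but unlike a tracing GC it can never reclaim reference cycles. A minimal sketch of the idea in modern C++ terms (shared_ptr standing in for the AddRef/Release bookkeeping VB's COM objects did; the names here are illustrative, not VB's actual machinery):

    #include <cstdio>
    #include <memory>

    struct Node {
        std::shared_ptr<Node> next;  // shared_ptr is reference-counted
        ~Node() { std::puts("Node freed"); }
    };

    int main() {
        // Normal case: count drops to zero, object is freed immediately.
        auto a = std::make_shared<Node>();
        a.reset();  // prints "Node freed"

        // The classic weakness vs. a tracing GC: a reference cycle
        // keeps both counts above zero forever, so nothing is freed.
        auto x = std::make_shared<Node>();
        auto y = std::make_shared<Node>();
        x->next = y;
        y->next = x;  // cycle: these two Nodes leak
        return 0;     // no "Node freed" printed for x or y
    }

The upside, and the reason VB got away with it, is determinism: objects die at a predictable point instead of whenever a collector happens to run.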
Most importantly, nobody had the internet, so apps couldn't report crash dumps back to their developers. Most developers never heard about their apps' crashes and had no way to find them except exhaustive, human-based testing. That's basically what distinguished stable software from unstable software: how much money you paid to professional software testers.
Everyone who used computers back then remembers the "save every few minutes" advice being drilled into people's heads. It was needed, too, but that wasn't entirely Microsoft's fault. Computing just sucked back then, even more than it does today.