Everyone must have a different story; here's a bit of mine. In 1960 I went off to college and had a chance to register for the first undergraduate course in programming ever offered there. Computers were not a big part of everyday life then, and the idea of programming was exotic and enticing to a young lad. I dove right in with only the fuzziest notion of what I was getting into. I was really, really lucky to have Alan J. Perlis (later the first recipient of the Turing Award) as my instructor for that class. He was simply amazing, and the friendliest teacher I had that freshman year.
We started learning to program in assembly language and did a fair number of puzzle-solving problems, many having to do with moving pieces on a chess board. Those puzzles make you think hard about data structures and about abstracting physical reality, without beating you up; the sketch below gives the flavor. We also had round-the-clock access to the computer (an IBM 650) and wound up spending a lot of evenings feeding cards into its maw. That class showed me what I wanted to do with my career: I wanted to make computers do things that had not been done before.
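The actual assignments are long gone, so here is only a minimal sketch of the kind of representation those puzzles push you toward, written in C for readability rather than 650 assembly. The 8x8 board and the knight's legal moves are real chess facts; the type and function names are my own invention.

```c
#include <stdio.h>

/* A chess board abstracted as an 8x8 grid: squares are (file, rank)
   pairs with 0 <= file, rank < 8. Names here are illustrative. */

/* The eight (dx, dy) offsets of a knight's move. */
static const int KNIGHT_DX[8] = { 1,  2,  2,  1, -1, -2, -2, -1 };
static const int KNIGHT_DY[8] = { 2,  1, -1, -2, -2, -1,  1,  2 };

/* Print every square a knight on (file, rank) can reach,
   discarding moves that fall off the edge of the board. */
static void knight_moves(int file, int rank)
{
    for (int i = 0; i < 8; i++) {
        int f = file + KNIGHT_DX[i];
        int r = rank + KNIGHT_DY[i];
        if (f >= 0 && f < 8 && r >= 0 && r < 8)
            printf("%c%d\n", 'a' + f, r + 1);
    }
}

int main(void)
{
    knight_moves(1, 0);   /* knight on b1, as at the start of a game */
    return 0;
}
```

The point is not the chess; it is that "a piece on a board" has to become an explicit data structure, and "a legal move" an explicit rule over it, before the machine can do anything with either.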
The undergraduate programming class ran for two semesters. After that, everything else was self-taught, and sometimes quite a struggle as techniques and technologies continued to evolve.
In those days, computer science and programming were genuinely new and had not yet found their place in academia. I was mostly advised to continue studying a more traditional subject, though I really wanted to pursue programming. I finished my student career with a PhD in theoretical chemistry, but that was really a sneaky way to do more programming and play with computers.
I am still of the opinion that coding should be learned first at the assembly-language level and then made progressively more abstract. When you are coding at the machine-instruction level, you really have to learn what the machine can and cannot do, and I think that informs your programming for the rest of your career.
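To make "what the machine can and cannot do" concrete: on a machine with no multiply instruction, you build multiplication yourself out of shifts and adds. Here is a small C sketch of that classic routine (C rather than assembly for readability; the function name is mine, and this is a general illustration, not anything from the 650):

```c
#include <stdio.h>
#include <stdint.h>

/* Multiply two unsigned integers using only shifts and adds --
   the kind of routine you write by hand when the hardware
   offers no multiply instruction. */
static uint32_t shift_add_multiply(uint32_t a, uint32_t b)
{
    uint32_t product = 0;
    while (b != 0) {
        if (b & 1)          /* if the low bit of b is set,  */
            product += a;   /*   add the current shifted a  */
        a <<= 1;            /* a doubles each round         */
        b >>= 1;            /* examine the next bit of b    */
    }
    return product;
}

int main(void)
{
    printf("%u\n", shift_add_multiply(6, 7));   /* prints 42 */
    return 0;
}
```

Once you have written something like that yourself, a `*` in a high-level language is never magic again; you know roughly what the machine is being asked to do underneath.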