If you include the fact that you never bought it, that's more information that affects the probabilities. It's just like in the Monty Hall problem where revealing a goat behind one door changes the probabilities of what's behind the other doors.
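The Monty Hall update is easy to check empirically. Here's a minimal simulation sketch (function names are illustrative, not from any particular library) showing that switching after the host reveals a goat wins about 2/3 of the time:

```python
import random

def monty_hall_trial(switch):
    """One round: car behind one of three doors, player picks, host reveals a goat."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a door that is neither the player's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

def win_rate(switch, trials=100_000):
    return sum(monty_hall_trial(switch) for _ in range(trials)) / trials
```

Running `win_rate(True)` comes out near 2/3 and `win_rate(False)` near 1/3: the host's reveal carries information, just like the "never bought it" fact does.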
Given the fact that cannabis was recently made legal where you live, you may be 5% less likely to pass a class. Given the additional fact that you chose not to use cannabis, you may be 5% more likely to pass a class due to the curve being lowered by those who do smoke.
I thought we already knew the academic impact of cannabis use from the documentary Fast Times at Ridgemont High.
I thought the characters drove their cars to school in that movie.
When most AI people are talking about artificial intelligence, they are talking about narrow "intelligence". This is why in Russell & Norvig's book they quickly move away from the term "intelligence" and instead speak of "agents" working in a particular "task environment", and whether the agents behave rationally or not. For example, a chess program may be able to win chess games against a grandmaster chess player, so we say this agent is performing rationally within this specific task environment. The chess program is not "intelligent" in the sense that you and I are -- it's an incredibly dumb automaton, as is nearly every computer program. You can see this when it fails miserably when put in any different task environment.
The intelligence that will bring about the singularity is artificial general intelligence, which is the same intelligence that you and I have, that is, the capability of performing well in a very wide variety of environments. This type of agent would be able to reason about how to improve itself and bring about that improvement. Very little AI research these days involves artificial general intelligence, and the progress in this area is slow.
It is easier to write an incorrect program than understand a correct one.