It's hard to credit the behavioural science claim.
We already know how to social engineer our way into secure areas and secure buildings (including nuclear and military facilities), how to get people to hand over their passwords or reset someone else's password, how to get the police to respond with deadly force against an otherwise innocent third party (e.g. SWATting), how to get people to click on crap they shouldn't click on in emails, and how to get them to install "media player updates" that aren't, or anti-malware that's actually malware, and so on...
How is additional funding for behavioural science in this area going to make us any more secure by making us even more aware of the exploits we already know, such as those being used by Mitnick prior to 1995 to get into the phone company?
We already understand the human behaviour that allows these attacks to work -- and so does Microsoft, and they're not putting any real effort into fixing their software in light of that knowledge.
So how *exactly* will additional spending in this area improve cybersecurity? Will it make anyone less likely to believe someone pretending to be from the IT department? Will it make someone less likely to let you onto the premises when you pretend you want to talk to the property manager "or someone else in charge" about purchasing land adjacent to an otherwise secure facility?
I kind of don't think so.
But... BOOGA! BOOGA! Cybersecurity! Cyberwarfare! Fund us, fund us!