
Comment Re:Let them lease, but not screw with sales (Score 2) 225

It's in the definition of the word SALE.
If I buy something I OWN it. That means I get to do with it what I want, barring government restrictions. The schmuck that sold it to me does not have the right to say "HEY! You can't DO THAT!"

They gave up that right when they sold it to me.

When I sell you a house, I can't then complain and say "Now wait a second, I may have sold you that house, but it's still mine and I don't like that new garage you are building!"

Correct. The real reason we see this is twofold - first, because of manufacturing and second, because of fraud.

The use of adhesives in assembly should be obvious - adhesives make for quicker assembly, and when you're making millions of widgets, screws get in the way.

Warranty fraud is a huge issue, and it's one thing a site like iFixit conveniently ignores. What happens here is a user may get curious and want to take a peek inside their device, so they try to open it. Usually things go well and they put it back together successfully, but sometimes they break it. Then they go and try to claim "it just stopped working".

And I say iFixit ignores it because, first of all, the manufacturer then has to implement countermeasures to protect against this. But it also means that if a site offers repair services, they need to protect themselves as well - imagine selling repair services and now having to fix someone's curiosity. There is no sane resolution - it'll be the user/customer vs. the repair shop.

Proprietary screws also help prevent this: if a user is willing to buy a screwdriver from iFixit, they're probably skilled enough to actually fix it. But the vast majority of users are not. And iFixit doesn't serve the general public - just the few people who care.

And that's the real problem.

Comment My personal stack overflow experience (Score 3, Informative) 167

I've been posting on and moderating Slashdot for years, but just started with Stack Overflow. Here's my experience.

I definitely agree with and have seen what the articles are driving at. In particular, "The Decline of Stack Overflow" is absolutely 100% on the money.

I answer questions. At least five / day. In a short time [about a month], I amassed 1000+ rep points. I'm now in the top 0.5% for the quarter. The article's comment about "SO hates new users" is true. Before I got to this point, I used to have more difficulty with certain people. As my point total got higher, the snark level went down. Ironic, because I was doing [trying to do] the best job I could at all times. My answers didn't change in terms of quality, just the tone of comments I got back.

When I post an answer, I take several approaches. Sometimes, a simple "use this function instead" is enough. Sometimes, "change this line from blah1 to blah2". If the OP has made an honest effort to be clear, but the posted code is way off the mark (e.g. has more than two bugs), I'll download it, clean it up for style [so I can see the OP's logic before I try to fix it], fix the bugs [usually simplifying the logic], and post the complete solution with an explanation of the changes and annotations in code comments.

This is the "cut-n-paste" solution. I may be just doing somebody's homework for them. But, usually, it's just somebody who spent days on the code and is "just stuck" [I've asked some OPs about this]. The controversy is that "if you do that, they'll never learn anything". Possibly. But, it's part of my judgement call on the type of response to give. IMO, in addition to research and traditional classes/exercises, one of the best ways to learn is to read more advanced "expert" code. Compare one's original to what the expert did and ask "Why did they do it that way?!". This may foster more research on their part, and they will have an "AHA! moment".

Unlike Slashdot, one can edit a post [either a question or an answer], and you can delete them. Comments can be edited for five minutes and deleted anytime. Now this will seem goofy: if you comment back and forth with a given user over an answer one of you gave, either a collegial discussion or a flame war, eventually an automatic message comes up asking if you'd like to transfer your "discussion" to a chat page. Also, because comments are limited to 500 chars, I sometimes have to post what really belongs in a comment as a partial, incomplete answer, because what I need to say needs better formatting/highlighting than a comment allows and wouldn't fit in one anyway.

The goofy thing is that you start with 1 rep point. You can post a question or a full answer. But, you can't yet post a comment!?

On SO, people edit their questions and answers based on feedback in the comments. The answer may be edited several times before the questioner accepts it. Sometimes, for complex questions, it can take a day or two to come up with the right answer.

Despite all this, once in a while, I get a "heckler" who doesn't like an answer [even though it's correct]. It goes several rounds in the comments; usually the other person doesn't understand the problem space enough to realize the answer was correct [or more subtle than they realized]. So, it goes back and forth, and each time I explain how I was correct, adding clarification or highlighting what I said originally. Eventually, the heckler says "Your answer doesn't answer the question". This is for an answer the questioner has "accepted" as the best one.

I've seen reasonable questions downvoted within minutes [I upvote them back]. I've seen people threaten to close a question as unclear, opinion-based, or can _not_ be answered as described. The last one is funny, because the question is clear to me, and I provide a correct answer [that eventually gets upvoted and/or accepted]. Sometimes I send the commenter who is threatening doom a message [you can direct a comment to a specific user--like Twitter] and say "Hey! The question can be answered--as is. Please see my posted answer".

Because I have particular domain expertise, I tend to see some of the same people active on a question that I feel qualified to answer.

Some are superb angels:
- Always polite
- Extreme kindness to newbies [even if the newbie "doesn't get it"]
- Always provide a helpful and correct answer.
- Plus, a ton of helpful comments, even when not posting a full answer.
- Often, post a helpful comment, and come back with a full correct answer an hour later
- May post a comment about how an answer was wrong. They're usually right--I've had this happen to me once or twice

Some are what I'll call "keepers of the faith" or KOTFs:
- They will comment "consult man page" [without explaining _which_ manpage].
- Or [and this is popular], "please post an MCVE". An MCVE is SO jargon for a "Minimal, Complete, and Verifiable Example". This happens even for questions that are already clear, concise, etc.
- Note some "angels" will say "do MCVE" but the difference is tone: angels do it with love--and only when the question actually is unclear. KOTFs stay polite but it comes off as abusive
- The KOTF crowd camps on the SO moderation pages. They do direct moderation, but the pages operate mechanically more like Slashdot's meta-moderation section.
- The problem is that KOTFs will moderate based on form rather than domain expertise (e.g. a bash programmer moderating a question involving python)
- They moderate a question as "should be deleted" because it didn't fit the MCVE requirement--or so they thought.
- Because they don't have the domain expertise, they're not qualified to judge whether the question is clear to a person who is "skilled in the art" of the particular domain (e.g. unclear to bash person, perfectly clear to python person)
- KOTFs also leap on the littlest missing char in OP's posted code, even when it's obvious that SO's posting mechanism is to blame [SO uses markdown, and you have to indent four spaces before each line of code]. SO doesn't allow a direct upload or clean paste like pastebin
- Also, downvoting a question costs the downvoter only 2 rep points, so KOTFs can (and, unfortunately do) downvote quickly and frequently. Shoot first and ask questions later.
- After downvoting, KOTFs are likely to start commenting on a page about MCVE, code is terrible, consult man page, why are you asking this.
- For some, in separate comments as they gradually think of new things to snipe/carp about [*]

[*] I saw this literally on a page that had the question upvoted to +5. Multiple commenters [myself included] were, using the comment system, helping OP test/debug his code in real time (e.g. "try this", wait, "what result?", "okay, try this")--this is unusual, but not that unusual. We were "online" with OP for about two hours before OP got so intimidated by the KOTF that he deleted the question page. Fortunately, OP had gotten enough hints from the angels that he was able to find his problem, and he reposted his question the next day with a completed answer.

Frankly, even when I'm not the target of this, it can be difficult to watch. If the KOTF is genuinely wrong, I'll sometimes comment back to them directly, because it's clear their primary mission is to make OP feel as bad as possible. Doing so takes time/energy on my part that is better spent answering a question, but otherwise, KOTFs can [and do] run amok. With the "online" example, I finally composed a message telling the KOTF to back off, but the page was deleted 10 seconds before I could send it.

Overall, SO still works. But, it could use a facelift ... Ironically, younger programmers think that older ones can't learn new things. But, on SO it appears that the KOTFs are younger programmers who started on SO early, amassing points over a multiyear period. Because they've been there so long, they feel like "they know what's best" or that they know when a question [or answer] is "well formed" or not. Older programmers have come to the site more recently, so they're more circumspect. So, in this context, who's really the tired old man?!?!

Most OPs try to post a good question. Sometimes, they're newbies, and need help to formulate it. Ignoring KOTF comments, others will ask for more specific information: post this, and this, and this. With the commenters' help, OP can edit the question enough that they can get a good answer back. Usually, OPs are more than willing.

An expert answerer doesn't always [and frequently doesn't] need a perfect question to provide a good answer. So, if they're asking, it usually means the info really is needed--by them or by anybody else trying to answer.

However, some OPs do take offence at being asked to post more information, even if it's needed. Sometimes, a further explanation by the commenter as to why they need more info gets OP over this. Sometimes, OPs drop the question at this point. I can only surmise this is due to ego crush, even when it's an angel asking. Some OPs do have a sense of entitlement: if they post a question, it should be answered--quick. They're also the hardest to get adequate info from.

And, sometimes, it appears [to me, at least] that OP thinks the code they held back [even if not proprietary] is too "special" to "give away for free". They never say this outright, but after several rounds of multiple commenters asking for additional info that is never provided, what other conclusion can one draw? This is more likely to happen with newbie programmers or newbie SO posters (e.g. "I'm working on my first singly linked list implementation, but mine is going to be special and revolutionary for the world").

So, I will continue to post answers. Yes, I do "work" for "points". But, in addition to getting my answer "accepted" [worth points], I frequently get a comment from the OP: "Thank you! Everything is now so clear". And that, as much as anything else, is the reason to do it.

It's what allows me to continue through the minefield of SO's version of "The Good, The Bad, and The Ugly" ...

Comment Re:What changes? (Score 1) 48

What changes is very little. If a telemarketer is honest, he's probably playing by the rules already - however, these people are scammers. They're not going to suddenly change the way they operate because the FTC said "stop, or we'll say stop again!". It's not like most of the marks these people are going after will even be aware of such changes.

Well, you can easily run a PSA on "If someone asks you for money via Western Union/etc., it isn't legit, so just hang up".

And that's likely the point. Because those methods of payment are less trackable, the authorities are helpless to assist victims. But if you use a credit card, all of a sudden a lot more of the transaction is trackable.

So first, the marks have one more tool to help identify a scam, and should the scammers try to use a legitimate service, the tracking is a lot better.

Comment Re: Introduction (Score 4, Interesting) 207

I think you're underestimating the marketing opportunity of a recall. They're just going to put a wrench on the bolt; that costs nothing. Yeah, some minimal labor costs. BUT... who goes through the pain of taking their car to a dealership without getting everything else it needs serviced? Or just buys a whole new car, which isn't uncommon, especially if someone can afford the $80k+ to buy one in the first place.

Something tells me Tesla will come out ahead on this one.

Well, Tesla is quite different - you can buy a $600/year service plan that covers everything except tires, and for a bit more, Tesla will come to you to service the car.

The thing is, an ICE takes a lot of maintenance - between stuff like engine oil and other fluids, there's a bit of tuning to keep things in shape. An EV is different - there's actually very little in the power train that requires regular servicing - so much so that users may go for years between tune-ups (Tesla recommends users come in at least once a year to get service and replace consumables like brakes). Most ICE service schedules range from every 3 months to every 6 months.

And yes, Tesla will probably come out ahead - I mean, look at the other recalls out there - between Toyota's sudden acceleration, GM's ignition switch and many others, either the company didn't act until forced to, or they still don't act, even when there are multiple deaths attributed to the flaw.

So they get a lot of PR over it - "we're recalling every Tesla S to make sure the seatbelts are bolted on correctly, even though there was only one failure and everyone lived, and the government isn't making us do it, but we will because it's the right thing to do."

Comment Re:The life of a test pilot ... oh wait. (Score 1) 96

I was gonna say, "well, seeing what happens when you go too fast is part of a test pilot / driver's job", until the article mentioned bringing kids along. Ugh, that's reprehensible.

Well it depends, because part of the TGV tests in the final phase is public demonstration. It happened in Japan - their newest bullet train was running on test tracks, yet many people lined up to buy tickets to be the first to ride it (it can go over 500 kph). While it will take many years to build or upgrade the tracks to support the new trains, this was an opportunity to see the future now.

So just because it's a test train on a test track doesn't mean it was being tested. Likely it was a public demonstration of the future capabilities of the TGV, which will take 20+ years to fully bear out. And the public loves this sort of thing - being the first to see the future of trains and to ride them. And likely the demonstration was something well within the safe envelope - you're not testing anything, just showing it off.

Of course, what it really shows is that humans are human and stupid mistakes still happen. And the new trains still lack proper warning equipment when autobrakes and speed limits are disabled or exceeded.

Comment Re:GM producers are shooting themselves in the foo (Score 1) 513

Does the food that you purchase identify the conglomerate which entirely owns the folksy subsidiary whose name appears on the product?

That's because they're not required to. I presume most of the population would be shocked to find out that 99% of the stuff they buy at the supermarket comes from approximately 12 companies, all of them recognizable.

Companies behind the brands.

(That image was created as part of an Oxfam report, Behind the Brands).

Comment Re:Pure hype (Score 1) 67

It's a great story, but her invention was never used. It's a huge stretch to say it relates at all to current spread-spectrum technology. Even if you think it is related, the developers of spread spectrum did not base their ideas on her invention, and it's unlikely they were even aware of it.


Hedy Lamarr's form of spread spectrum, called Frequency-Hopping Spread Spectrum (FHSS), is used in Bluetooth, and early WiFi (802.11, no "b") was among the first implementations. Basically, Hedy based it on the piano rolls used by player pianos, as a way for the Navy to control torpedoes without them being jammed.

The other form of spread spectrum, Direct Sequence Spread Spectrum (DSSS) is much more modern - its patents are owned by our dear friend Qualcomm, who used it to create CDMA as an alternative to FDMA and TDMA mechanisms.

Qualcomm owns DSSS because they basically invented it, and it's quite recent.

FHSS is still spread spectrum, and it still accomplishes the goal of spreading the signal out, so that interference on any one channel only affects the signal for a little while.

FHSS is easier to do if your communications are channelized and you can switch between channels easily (which is why it's older). DSSS requires more computation, and initial acquisition of the stream is a lot harder: since you're not entirely sure where in the chip code you are, you not only have to pick the right PRNG seed, but you also need to advance the code until your correlator starts detecting a signal.

(The chip codes are carefully selected so the correlator only produces a noise output if the chip code is wrong.)
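
To make the acquisition problem concrete, here's a toy numeric sketch of the DSSS idea [mine, not any real PHY; all names and parameters are illustrative]: spread data bits with a +/-1 chip code derived from a PRNG seed, then show that a correlator only produces a strong output when both the seed and the alignment are right.

    import random

    def make_chip_code(seed, length=32):
        """Generate a +/-1 pseudo-random chip sequence from a PRNG seed."""
        rng = random.Random(seed)
        return [rng.choice((-1, 1)) for _ in range(length)]

    def spread(bits, code):
        """Spread each +/-1 data bit across the whole chip code."""
        return [bit * chip for bit in bits for chip in code]

    def correlate(signal, code, offset):
        """Correlate one code-length window of the signal at a given offset."""
        window = signal[offset:offset + len(code)]
        return sum(s * c for s, c in zip(window, code))

    code = make_chip_code(seed=1234)
    tx = spread([+1, -1, +1], code)

    # Right seed, right alignment: output peaks at +/-len(code).
    print([correlate(tx, code, i * len(code)) for i in range(3)])

    # Wrong PRNG seed: output stays near zero -- the signal looks like noise.
    wrong = make_chip_code(seed=9999)
    print([correlate(tx, wrong, i * len(code)) for i in range(3)])

    # Wrong alignment ("where in the chip code you are"): typically also near
    # zero, which is why acquisition must slide the code until a peak appears.
    print(correlate(tx, code, 5))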

Comment Re:Initial Thought (Score 5, Informative) 85

My initial thought was that if math can be performed that produces the same results encrypted as unencrypted, then it isn't very well encrypted. My understanding is that better encryption techniques approach something that looks like static (randomness).

It's strong. Very strong.

Problem is, there's a tradeoff between speed and the operations you can do. There are general algorithms that let you do a wide variety of operations, but they are very slow - on the order of a million times slower than operating on unencrypted data.

Faster algorithms usually restrict the operations you can do on the data, and performance is almost equal to that of unencrypted data.

Note that you don't simply say "I want to add these two numbers", encrypt them, then do a simple add - no, the operation after encryption may be a multiplication or some other operation.

And this is actually very useful - because it lets you store critical data in the cloud, and perform manipulations of that data in the cloud, without the cloud provider having to have the encryption key. If the data is stolen, the hacker gets encrypted garbage.

So the current application is databases - you put encrypted data in the cloud, and the cloud provider runs an encrypted database service. You can perform limited queries, and the cloud provider returns the matching rows as encrypted blobs. You use the key (kept onsite for security) and marvel that you just did a transaction in the cloud: the cloud provider executed the operation, you got back the rows you wanted, and at no time other than on your PC was the data ever in plaintext.

You can get fancier - say you want to add up a column: you tell the database server to add it up (encrypted), and the final result is sent back as encrypted data. You use your key and get your answer.
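
Here's a minimal sketch of how that column-sum works, using the Paillier cryptosystem [one of the "restricted but fast" additively homomorphic schemes - not necessarily what any particular cloud service uses]: the server multiplies ciphertexts together, and only the client, holding the private key, sees the total. Toy key sizes, wildly insecure, for illustration only.

    from math import gcd

    # Toy Paillier keypair. The primes are absurdly small for illustration;
    # a real deployment would use a 2048-bit+ modulus.
    p, q = 293, 433
    n = p * q
    n2 = n * n
    g = n + 1
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)

    def L(x):
        return (x - 1) // n

    mu = pow(L(pow(g, lam, n2)), -1, n)   # decryption constant (Python 3.8+)

    def encrypt(m, r):
        # c = g^m * r^n mod n^2 (r must be coprime to n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return (L(pow(c, lam, n2)) * mu) % n

    # "Cloud" side: sum an encrypted column WITHOUT ever holding the key.
    # Note the ciphertext operation is a MULTIPLICATION even though the
    # plaintext operation is addition -- as described above.
    column = [17, 25, 300]
    ciphertexts = [encrypt(m, r) for m, r in zip(column, (1234, 5678, 9012))]

    enc_sum = 1
    for c in ciphertexts:
        enc_sum = (enc_sum * c) % n2

    # Client side: decrypt with the private key kept onsite.
    print(decrypt(enc_sum))   # 342 == 17 + 25 + 300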

That's the primary use case for this sort of encryption. Do it right and even an in-house database can be completely encrypted, so things like health information and banking records are never in plaintext until you need them, and breaches won't be as harmful.

Comment Re:Money (Score 1) 337

My thoughts exactly. Apple is all about avoiding product cannibalization. Thus the super price tags and performance disparities on the Mac Pro vs iMacs, iPad/Mini vs iPad Pro, MacBook Air vs MacBook Pro 13, MacBook Pro 13 vs MacBook Pro 15, and even MacBook Pro 15 vs MacBook Pro 15 with dedicated graphics. The only notable exception is the standard and plus iPhones, but hey, they still maintain amazing price differences for storage capacity on those.

Uh, no. Apple cannibalizes themselves a lot.

I mean, iPods are pretty much dead - killed by the iPhone and the like. Apple saw that coming and didn't hang onto the iPod. The only reason Apple even sells iPods is that there are still a few people who buy them, but the time between updates shows that updating them regularly is not Apple's priority.

And the iPad Pro's pricing is enough to eat into MacBook sales. Even the high-end iPhones cost as much as iPads.

And Apple knows it happens - it's why the iPad Air 2 is still around and there's no iPad Air 3 out.

There are no sacred cows for Apple - if the iPad Pro is the way to go, they'll develop it and let the low-end MacBooks rot.

And the price tag for Macs is deliberate - Apple doesn't want to enter the race-to-the-bottom spiral. And you could argue the PC industry went that way and has headed back - everyone did an Apple and started releasing decent laptops again at higher (Apple-like) price points. Apple didn't follow the crowd to sub-$500 laptops, while the PC industry slavishly eked every dollar out of the segment. So much so that Intel had to spend millions of dollars convincing manufacturers that higher margins are worth it and to release higher-end products to compete with the MacBook Air.

Comment Re:Too many self-absorbed people (Score 1) 119

It's one of the reasons that the smartphone is blamed for making people stupid.

No, there are just as many stupid or smart people around as before. The unfortunate part is that things like computers and smartphones now put technology in the hands of the stupid, so now we have to listen to them bitch, whine, and say stupid things.

Well, wasn't one of the primary goals of the Internet to make everyone a publisher?

Of course, I'm sure we HOPED people would use the communications ability of the Internet for good-for-humanity reasons like rooting out censorship or oppression, but in the end, we forgot it's really a great publishing medium for the idiotic to post stupid stuff.

So yeah, we opened the ability for the masses to communicate. It's just that the masses don't really care about "good for humanity"; they really just care that their Amazon package arrived 2 hours late, or their food wasn't hot enough, or other stupid crap. Heck, companies have to attend to every little triviality ("first world problems") now instead of being able to take care of the more important issues. Instead of being able to devote resources to handling the customer who has a legitimate complaint, now they have to handle that customer amongst the 1000 others who are complaining they were shortchanged a penny or some crap like that.

Comment Re:I just want to charge at the current specs (Score 2) 75

Part right. The spec allows delivery of at most 1.5A, signaled by detection of shorted D+/D- lines (the iDevices don't comply with this). The spec does not allow at all for measuring resistances, despite the fact that Sony, Apple, Samsung, etc. all implement this in their chargers. But quite critically, at least some of them (Samsung) correctly implement enumeration of the charger to determine the maximum current draw, as per the standard.

And the reason is, guess what? Shorting D+/D- says UP TO 1.5A.

Which really means you can't draw that much anyway, because if the user plugs you into a device that only provides 500mA, guess what? You can only draw that.

Which is why I don't see why the USB folks didn't take a page from Apple and use their spec: shorting D+/D- says absolutely nothing about how much current you can actually draw. And I've seen rather ... explosive ... results from devices that tried to draw 1.5A from a charger incapable of supplying it.

At one point, it was 500mA. At another point, it was 800mA. Now it's 1.5A.

As a spec, it sucks - it means I can't tell how much current I can draw. And there's way too much made-in-China crap with a USB port, which makes it risky to assume you can draw 1.5A from it. If you say the user must use the same charger with the device, that eliminates the whole reason to standardize.

At least the Apple spec tells you electrically. And there are many devices where it says "2A" on the plate but the resistors say 500mA.
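
For illustration, here's roughly what the detection logic looks like from the device side. The Apple D+/D- voltage pairs below are approximate values commonly reported in teardowns, not an official spec - treat every number here as an assumption.

    def classify_usb_charger(d_plus, d_minus):
        """Guess a safe current draw (mA) from D+/D- voltage measurements.

        The Apple-style pairs are approximate values from teardowns,
        used purely as an illustration, not as an official spec.
        """
        def near(v, target, tol=0.25):
            return abs(v - target) < tol

        # Apple-style resistor dividers: the voltages encode a current limit.
        if near(d_plus, 2.0) and near(d_minus, 2.0):
            return 500     # "500 mA" charger
        if near(d_plus, 2.0) and near(d_minus, 2.7):
            return 1000    # "1 A" charger
        if near(d_plus, 2.7) and near(d_minus, 2.0):
            return 2000    # "2 A" charger

        # Shorted D+/D- (dedicated charger): the spec only promises
        # "UP TO 1.5 A", so the charger may legally supply far less.
        if abs(d_plus - d_minus) < 0.05 and d_plus > 0.25:
            return 500     # be conservative: assume the worst case

        return 100         # no signature: USB 2.0 pre-enumeration default

    print(classify_usb_charger(2.7, 2.0))   # 2000 -- the limit is explicit
    print(classify_usb_charger(0.6, 0.6))   # 500  -- the short tells you little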

Comment Re: 20 cores DOES matter (Score 1) 167

If we're talking about bulk builds, for any language, there is going to be a huge amount of locality of reference that matches well against caches. shared text RO, lots of shared files RO, stack use is localized (RW), process data is relatively localized (RW), and file writeouts are independent. Plus any decent scheduler will recognize the batch-like nature of the compile jobs and use relatively large switch ticks. For a bulk build the scheduler doesn't have to be very smart, it just needs to avoid moving processes around between cpus excessively so and be somewhat HW cache aware.

Data and stack will be different, but one nice thing about bulk builds is that there is a huge amount of sharing of the text (code) space. Here's an example of a bulk build relatively early in its cycle (so the C++ compiles aren't eating 1GB each like they do later in the cycle when the larger packages are being built):


Notice that nothing is blocked on storage accesses. The processes are either in a pure run state or are waiting for a child process to exit.

I've never come close to maxing out the memory BW on an Intel system, at least not with bulk builds. I have maxed out the memory BW on Opteron systems, but even there one still gets an incremental improvement with more cores.

The real bottleneck for something like the above is not the scheduler or the pegged cpus. The real bottleneck is the operating system which is having to deal with hundreds of fork/exec/run/exit sequences per second and often more than a million VM faults per second (across the whole system)... almost all on shared resources BTW, so it isn't an easy nut for the kernel to crack (think of what it means to the kernel to fork/exec/run/exit something like /bin/sh hundreds of times per second across many cpus all at the same time).

Another big issue for the kernel, for concurrent compiles, is the massive number of shared namecache resources which are getting hit all at once, particularly negative cache hits for files which don't exist (think about compiler include path searches).
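
You can see the shape of that problem in miniature: an include search probes every -I directory in order, so most lookups are for files that don't exist. A quick sketch (paths and headers are hypothetical):

    import os

    # A compiler probes every -I directory in order for each #include, so
    # most lookups are for files that do NOT exist -- negative namecache hits.
    include_path = ["/usr/local/include", "/usr/include"]   # typical -I order
    headers = ["stdio.h", "stdlib.h", "myproject.h"]        # hypothetical

    hits = misses = 0
    for hdr in headers:
        for d in include_path:
            if os.path.exists(os.path.join(d, hdr)):        # one stat() each
                hits += 1
                break
            misses += 1                                     # negative lookup

    print(f"{hits} hits, {misses} negative lookups")
    # Multiply by thousands of #includes and hundreds of concurrent compiler
    # processes, and the shared namecache becomes the contended resource.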

These issues tend to trump basic memory BW issues. Memory bandwidth can become an issue, but it will mainly be with jobs which are more memory-centric (access memory more and do less processing / execute fewer instructions per memory access due to the nature of the job). Bulk compiles do not fit into that category.


Comment Re:Does this really change anything? (Score 1) 85

... isn't it still likely that the easiest way for manufacturers to comply will be total lockdown?...

Well, then it will be the manufacturers to blame, not the FCC.

Most likely what will happen is that the chipset manufacturers will build a set of OTP fuses into the chipset (fuses already exist for things like MAC addresses) that set the regulatory domain. The WiFi firmware reads the fuses and locks out the frequencies it's not supposed to transmit on.

Existing hardware already has it, and really only the firmware has to change.
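
A sketch of what that firmware-side lockout amounts to. The fuse encoding here is made up; the per-domain 2.4GHz channel limits (1-11 FCC, 1-13 ETSI, 1-14 Japan) are the commonly known ones.

    # Hypothetical regulatory-domain fuse -> permitted 2.4GHz channels.
    # The fuse encoding is invented; the channel limits are well known.
    REGDOMAIN_CHANNELS = {
        0x01: range(1, 12),   # FCC (US): channels 1-11
        0x02: range(1, 14),   # ETSI (Europe): channels 1-13
        0x03: range(1, 15),   # MKK (Japan): channels 1-14
    }

    def read_otp_fuse():
        """Stand-in for reading the one-time-programmable fuse block,
        burned in at the factory like the MAC address."""
        return 0x01

    def tx_allowed(channel):
        """Firmware-side check: refuse to transmit outside the fused domain."""
        domain = read_otp_fuse()
        return channel in REGDOMAIN_CHANNELS.get(domain, [])

    print(tx_allowed(11))   # True  -- permitted under FCC
    print(tx_allowed(13))   # False -- locked out, whatever the driver asks for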

Comment Re:The manufacturer... (Score 1) 100

While I'll grant the manufacturer isn't likely to DELIBERATELY infect things, my first assumption is that the manufacturer simply has terrible security and the worm made it into the master image for all their devices.

In the complex world of manufacturing, there are several "manufacturers". There's the manufacturer - the guy who puts his name on the box and does all the marketing and selling. There's the design manufacturer who designed the hardware, and then the contract manufacturer who actually builds the thing, tests it, packages it up, and ships it.

Most likely, there is no "master image" - rather, when the contract manufacturer tested the hardware, the PC they used was infected and subsequently infected the USB disk. After all, general PC hygiene there is pretty poor - if a PC is needed for testing, you provide the software, environment, and instructions on what to do. (Sometimes, if there's special hardware and software involved, you may provide a PC.)

Internet access at the factory is often poor, so unless you want to pay for the CM's time, you keep the test setup self-contained as much as possible.

Comment Re: 20 cores DOES matter (Score 4, Informative) 167

Urm. And you've investigated this and found that your drive is pegged because of what? Or you haven't investigated this and have no idea why your drive is pegged. I'll take a guess... you are running out of memory, and the disk activity you see is heavy paging.

Let me rephrase... we do bulk builds of 20,000 applications with poudriere. It takes a bit less than two days. We set the parallelism to roughly 2x the number of CPU threads available. There are usually several hundred processes active in various states at any given moment. The CPU load is pegged. Disk activity is zero most of the time.

If I do something less strenuous, like a buildworld or buildkernel, the result is almost the same: CPU mostly pegged, disk activity zero for the roughly 30 minutes the buildworld takes. However, smaller builds such as a buildworld or buildkernel, or a Linux kernel build, regardless of the -j concurrency you specify, will certainly have bottlenecks in the build subsystem that have nothing to do with the CPU. A little work on the Makefiles will solve that problem. In our case there are always two or three ridiculously huge source files in the GCC build that the make has to wait for before it can proceed with the link pass. Similarly, a kernel build has a make depend step at the beginning which is not parallelized and a final link at the end which cannot be parallelized, and these actually take most of the time. Compiling the sources in the middle finishes in a flash.

But your problem sounds a bit different... it kinda sounds like you are running yourself out of memory. Parallel builds can run machines out of memory if the dev specifies more concurrency than his memory can handle. For example, when building packages there are many C++ source files which #include the kitchen sink and wind up with process run sizes north of 1GB. If someone only has 8GB of RAM and tries a -j 8 build under those circumstances, that person will run out of memory and start to page heavily.

So it's a good idea to look at the footprint of the individual processes you are trying to parallelize, too.
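
A back-of-envelope version of that footprint check, with assumed numbers matching the example above:

    # Will a "-j N" build fit in RAM, or page? All numbers are assumptions.
    ram_gb = 8
    per_process_gb = 1.0   # kitchen-sink C++ files can hit 1GB+ each
    j = 8

    peak_gb = j * per_process_gb
    verdict = "OK" if peak_gb <= ram_gb * 0.8 else "expect heavy paging"
    print(f"-j {j} needs ~{peak_gb:.0f}GB of {ram_gb}GB RAM: {verdict}")
    # -j 8 needs ~8GB of 8GB RAM: expect heavy paging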

Memory is cheap these days. Buy more. Even those tiny little BRIX one can get these days can hold 32GB of RAM. For a decent concurrent build on a decent CPU you want 8GB minimum; 16GB or more is better.


"The most important thing in a man is not what he knows, but what he is." -- Narciso Yepes