Comment Re:Resolution (Score 1) 397
You don't have to provide all the intermediate sizes in an
Also, aren't OS X icons bitmap-only as well? My impression was that only Linux DEs have embraced vector graphics for everything.
It is strictly application-dependent. If an app declares itself as high-DPI-aware (which it has to do explicitly in the app manifest), then it's expected to handle DPI properly by scaling everything appropriately. Some frameworks do it automatically - WPF, for example. Others do not, but people declare their apps as high-DPI-aware anyway because they don't understand what that actually implies.
Apple does something similar here, but their innovation was that, instead of resorting to fractional scaling for non-aware applications, they do integer scaling, which is far cleaner in practice.
It should be noted that this is a strict subset of fractional scaling that Windows has. If you set scaling to 200% in the latter, then you'll get the same integer scaling for non-DPI-aware apps.
Note that this is not a part of the language itself (a comment is a comment). You can use Doxygen with C# if you really want.
Though I agree that the "standard" syntax for doc comments could have been thought out better. On the other hand, the nice thing about XML is that it's an "Extensible" Markup Language. Which is to say, you can use any custom elements that you want in your comments, and the compiler will extract them into the generated XML documentation file.
This comment might explain what exactly C# async/await does that Java futures do not (and why it is a language feature, and not just a library thing).
Some were, some weren't. E.g., how did delegates make it easier to "write code that only ran on Windows"?
JNI is *NOT* platform dependent. That is exactly the point of it.
JNI is a platform-independent framework to access platform-specific native APIs. So same end result.
No, it was not; it only showed how ridiculous the Windows API of that time was, as AWT was nothing more than a wrapper around it, and was intended to be nothing more than that.
If AWT was "nothing more than a wrapper around Windows API", then how exactly did it work on Solaris and Linux?
Thing is, AWT was a wrapper over the lowest common denominator of Win32 and Motif. That it sucked was inevitable by design.
Then came Swing, and then finally MS got it and crafted something like Windows.Forms.
If you look at Windows Forms, it's essentially WFC (the Win32-specific UI framework offered in J++ as a better alternative to AWT) developed further and rewritten in C#.
And Swing? It still sucks today, but back then, in 1997 and even in 2001, it was a horribly bloated framework that had visible UI lag on all platforms. To add insult to injury, it also looked like crap on all those platforms.
You can create your own language and API, but you can't call it Java if it doesn't meet the standard. How difficult is this to understand?
Apparently it is, since this is exactly what GP said happened. The "can't call it Java" thingy that ended up being released is
Async is (or was) a pretty standard feature in some languages, runtime systems, and operating systems. Usually it's just a library or OS facility you can use, but ya, that's not integrated into the language itself. But consider Ada with its asynchronous transfer of control. Erlang has asynchronous stuff, but maybe that looks too much like message passing. Certainly Smalltalk allowed you to have blocks executed asynchronously. It all depends on your definitions, though: it's too easy to say that it's just boring old message passing, and that it's not really in the language unless there's some special syntax for it.
Async in the context of C# usually refers to async/await. If you're familiar with PL concepts, the concise way to describe it is that it takes a sequential algorithm and rewrites it in continuation-passing style for you, with continuation boundaries explicitly determined by where you put "await" in your code. E.g. consider this:
s = file.ReadLine();
Console.WriteLine(s);
s = file.ReadLine();
Console.WriteLine(s);
This code is just reading two lines synchronously, blocking the thread between two reads waiting on I/O to complete. If you wanted to avoid blocking the thread, you can write that with a chain of callbacks, much like what you see in Node.js etc:
file.ReadLineAsync().ContinueWith(t1 => {
    Console.WriteLine(t1.Result);
    file.ReadLineAsync().ContinueWith(t2 =>
        Console.WriteLine(t2.Result)
    );
});
This works, but it gets tedious when you have long chains, and even more tedious when you have nested chains. Worse yet, rewriting loops in this manner is not trivial. For example, simple synchronous code like this:
while ((s = file.ReadLine()) != null) {
Console.WriteLine(s);
}
becomes this mess if redone async callback style:
Action<Task<string>> body = t => {
string s = t.Result;
if (s != null) {
Console.WriteLine(s);
file.ReadLineAsync().ContinueWith(body);
}
};
file.ReadLineAsync().ContinueWith(body);
On the other hand, with async/await, the first example becomes:
s = await file.ReadLineAsync();
Console.WriteLine(s);
s = await file.ReadLineAsync();
Console.WriteLine(s);
and the second one is:
while ((s = await file.ReadLineAsync()) != null) {
Console.WriteLine(s);
}
Note how the only difference between the async and sync code here is the addition of "await" and the use of ReadLineAsync instead of ReadLine. The compiler transforms all that into code with callbacks, automatically inserting one in every place where "await" is used (and rewriting loops etc. as needed).
Really, all of this is just coroutines married to futures. In some other languages, you can take existing coroutine support and implement futures as a library (e.g. in Python, where "yield" becomes the equivalent of "await").
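The yield-based approach mentioned above can be sketched in plain Python (all names here are made up for illustration, and the "futures" are just zero-argument callables to keep it self-contained): a generator yields a future at each suspension point, and a small driver loop resumes it with each result - essentially what the C# compiler's rewrite does for you behind the scenes.

```python
# A minimal sketch of "await as a library" on top of plain generators,
# as was done in Python before async/await syntax existed.
def run(coro):
    """Drive a generator-based coroutine: each yielded 'future' is a
    zero-argument callable; its result is sent back into the coroutine."""
    try:
        future = next(coro)             # run until the first yield
        while True:
            result = future()           # "complete" the future
            future = coro.send(result)  # resume the coroutine with the result
    except StopIteration as stop:
        return stop.value

def read_line_async(lines):
    # Returns a "future": here, just a callable producing the next line.
    return lambda: next(lines, None)

def read_two_lines(lines):
    s = yield read_line_async(lines)    # 'yield' plays the role of 'await'
    print(s)
    s = yield read_line_async(lines)
    print(s)

run(read_two_lines(iter(["first line", "second line"])))
```

The driver loop is doing exactly what the generated C# callbacks do: it registers interest in a result and resumes the suspended function when that result arrives.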
Not really sure what LINQ is exactly, but it sounds a bit like SQL embedded within a language. Couldn't you do the same thing with a library call? What about Perl being able to query and filter data that it has?
LINQ is a lot of different things that combine together. Let me try to go point by point.
The first piece is what other languages call list or sequence comprehensions. If you ever wrote something like:
[x*2 for x in range(10) if x % 2 == 0]
in Python, you've used that. LINQ uses a less concise syntax that is more reminiscent of SQL for the same thing - e.g. the example above would be:
from x in Enumerable.Range(0, 10) where x % 2 == 0 select x*2
in C#, but semantics are the same. That syntax is actually just syntactic sugar for a bunch of method calls with lambdas, with keywords mapped to methods with the same name - the expression above, for example, is translated to:
Enumerable.Range(0, 10).Where(x => x % 2 == 0).Select(x => x*2)
And Where and Select can be pretty much anything - e.g. the return type of Where doesn't matter, so long as it has a method called Select on it. Standard collection types provide implementations that do what you'd expect them to do (not directly, but rather via a mechanism called "extension methods", which allows you to add members to existing types - but this doesn't really matter to the developer using this API).
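For comparison, the same desugared pipeline can be written with plain built-ins in Python, which shows that the query syntax is just higher-order functions applied to a sequence (Where corresponds to filter, Select to map):

```python
# The desugared LINQ pipeline, written with plain Python built-ins:
# Where -> filter(), Select -> map().
result = list(map(lambda x: x * 2,
                  filter(lambda x: x % 2 == 0,
                         range(10))))
print(result)  # [0, 4, 8, 12, 16]
```

The difference is that in C# the chain is written left-to-right via method calls, and each stage can supply its own implementation of Where/Select, which is what makes the provider mechanism below possible.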
The second part is lambdas themselves. When you see x => x % 2 == 0, this has pretty much the same meaning as the arrow-based lambda syntax in other languages that use it (like Scala), with the type of x inferred from the context in which it is used. However, there's a twist: instead of compiling this to a function and passing a reference to that, there's also an option of implicitly "quoting" lambda expressions as ASTs. Simply put, if the function you are passing the lambda to declares the type of that parameter as Expression<T>, then that function gets the AST for the lambda body instead of compiled code.
This allows the function to interpret that AST in different ways at runtime instead of just compiling and running it (though that remains an option). For example, in LINQ to SQL and Entity Framework, the AST is converted to SQL, and passed on to the server. In general, the idea is that you have various LINQ providers (i.e. implementations of Where, Select etc) accepting ASTs and converting them to whatever query language is used by the backend that you're extracting the data from. Thus, you have a single uniform syntax for queries, that is also strongly typed (can also be weakly typed since "dynamic" was introduced in C# 4.0), but which can be handled in different ways depending on what data you're querying.
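Python doesn't quote lambdas implicitly, but its standard ast module can illustrate what a LINQ provider receives and does with it. Here is a toy translator (purely illustrative - not how any real ORM works) that walks an expression tree and emits a SQL-ish fragment instead of executing the expression:

```python
import ast

# Toy illustration of a "LINQ provider": given the source of an
# expression, parse it into an AST and translate it to SQL text
# rather than evaluating it.
def to_sql(expr_source):
    tree = ast.parse(expr_source, mode="eval").body
    return translate(tree)

def translate(node):
    if isinstance(node, ast.Compare):
        left = translate(node.left)
        op = {ast.Eq: "=", ast.Gt: ">", ast.Lt: "<"}[type(node.ops[0])]
        right = translate(node.comparators[0])
        return f"{left} {op} {right}"
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Mod):
        return f"({translate(node.left)} % {translate(node.right)})"
    if isinstance(node, ast.Name):
        return node.id
    if isinstance(node, ast.Constant):
        return str(node.value)
    raise NotImplementedError(type(node).__name__)

print(to_sql("x % 2 == 0"))  # (x % 2) = 0
```

The key point is the same as with Expression in C#: because the provider sees structure rather than compiled code, it can target any backend query language, while the query itself stays statically checked in the host language.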
Except MS reallocated most of the Iron* and DLR devs to other projects, letting them fester...
Have you looked at some of those other projects, however?
IronPython itself is still alive and well, by the way, it's just not an MS project anymore.
I lost my interest when they killed Managed JScript, or whatever the DLR JScript runtime was called.
There was no such thing as a "DLR JScript runtime". There was JScript.NET, which is a compiler for JScript that output regular .NET assemblies.
You forgot Java's more theoretical advantages, which facilitate formal verification and allow for some pretty nifty JIT.
For example? The only thing that I can think as even remotely relevant here are checked exceptions, but this is the first time I've heard anyone claim that they allow for better JIT output, so I assume that you meant something else?
Then Microsoft asked "wouldn't this be better if it didn't suck?" So they worked on it. They "improved" on it. Thus was born the Windows-only J++ language. It was Java, only with extra stuff to make it work more Windows-y. Around the same time, Microsoft started considering the possibility of making the JRE also not suck. To that end, they hired Anders Hejlsberg and let him loose in the managed runtime labs to play with (torture?) JRE and VBRUN (oh, yes, we all remember VBRUN##).
J++ did indeed have additional language features, but they were not particularly "Windows-y". The only prominent thing that I can remember was the addition of delegates (basically, nominal function types) to the language.
What made J++ "Windows-y" was all the various extensions to the Java standard library that it had - in particular, Windows Foundation Classes. This was a UI library that wrapped native OS widgets (unlike Swing), but which wasn't platform-independent (unlike AWT) - it was very clearly and specifically about Win32. All in all, it looked a lot like someone took Delphi's VCL (which back then was by far the most popular RAD UI framework on Windows), and ported it to Java. It was also very similar to how VB did UI, although more powerful. As a result, you could write apps in it that didn't look and feel slow or alien to the platform, and a lot of people already had VB or Delphi skills that they could start using with J++. Swing, in contrast, had a much steeper learning curve, and visually apps using it were an eyesore to the users, not to mention all the UI lag.
The story with MS JVM is a bit different. Basically, when it came out, Sun was still struggling with performance of their own VM (they caught up with HotSpot maturing later on, and now they're actually faster than
When Sun sued and won, J++ was officially scrapped, but in practice a lot of that work was revived under
Fast forward a few years. Java is at 1.4, and everyone wants generics. That same year,
Your ordering is wrong here. Java got generics in 1.5 (aka 5.0), which was released in 2004.
What
This is completely irrelevant in this case. C# - the language - does have a standard. I haven't heard anyone complaining about how it's "not really a standard", like people were going about OOXML - and hell, there's at least one alternative implementation of the published standard in this case, which manages to fully cover it just fine.
"Everything should be made as simple as possible, but not simpler." -- Albert Einstein