Comment Re:was it intended to be secure? (Score 1) 97

"limited experience with a tool-poor scripting language..." - which are you referring to, Ruby? If so, Ruby is not tool-poor.

"...but in return, a lot of problems become quite a bit easier to solve." - Yes, I agree with you. Perhaps our disagreement is our perspective: I advise organizations, and so I tend to be on the side of maintainability - and that requires languages and tools that are naturally maintainable - not ones that require great effort to craft maintainability. I think that you advocate for the developer - and particularly the advanced developer. Yes, I agree that scripting languages enable you to code more quickly, although I have found the refactoring can introduce lots of bugs with scripting languages, unless you have a very high coverage unit test suite, which I try to avoid, and compilers help me to avoid that - saving me a huge amount of effort, and instead allowing me to focus on behavioral tests which are far more stable when one refactors.

I will note that I have seen very, very expert developers create mountains of unmaintainable code very rapidly, and not even know that their code was unmaintainable.

Comment Re:was it intended to be secure? (Score 1) 97

Aha. Now I know where the disconnect is in our discussion. I have been thinking in terms of updates, and you (it sounds like) have been thinking in terms of fetching data. Yes, for fetching data you are right: asynchronous is far more efficient, if one can get away with a best-effort (eventual-consistency) approach, which is usually the case for UIs.

For transactions that do updates, a synchronous approach is far easier to implement, because one does not have to keep track of application state: (1) one handles failures immediately, and (2) the transaction is atomic (if it is not, then you have to manage state at the application level). E.g., consider a user who reserves an airline seat, but between the time the user is shown the available seats and the time they submit, the selected seat is given away. The user does not know the seat was given away (their UI has not refreshed yet), so they click Submit to reserve the seat. In a synchronous approach, the Submit will fail right then, so their UI immediately receives a failure response and can update itself accordingly. But in an async approach, the user will receive a success response, and might even close their browser before a failure message arrives. The application then has to have additional logic to record that the user was not notified of the transaction failure, and will likely have to email the user to let them know that their seat reservation failed. Much more complicated.

I.e., message-oriented is simpler for fetching data, but synchronous is simpler for updates. Do you agree?

Comment Re:was it intended to be secure? (Score 1) 97

Go and C++ are quite different, and C++'s type safety may be stricter, but the type safety of Go is still fairly strict. Nuances aside, practically speaking I have found that languages like Ruby lead to very unmaintainable code. That was my point. Dynamic type features (which Go has to some extent) don't change that, because one uses those to add dynamic behavior to one's application, such as adding a new component at runtime, or dispatching to a method based on dynamic information such as a command that has been input. Dynamic type features are not usually used throughout a program, but only in special circumstances. And I have found that Ruby's lack of type checking can lead to very fragile code when one refactors.

Comment Re:was it intended to be secure? (Score 1) 97

"The asynchronous approach is much, much more complex to implement on top of an RPC system." - can you please give an example? I have implemented message based programs - and you are right, that it is a programming construct independent of the network - but IME message based applications are very complex to design: one must identify all of the states. But I am willing to learn! Thanks!

Comment Re:was it intended to be secure? (Score 1) 97

Yes, C++ is probably not for most programmers. To use it well, you have to spend a lot of time with it, and do a lot of reflection (reflection in the sense of thinking deeply about it). And you are probably right about Google not changing its attitude on dynamic versus static. But doesn't that say something? They have to handle very large things - they have had to from the beginning. The fact that they stay away from dynamic languages - what does that say?

I guess you can tell that I am not a fan of dynamic languages. I started my career writing compilers, and so I feel strongly about the benefits of static analysis. I have spent a lot of time writing Vagrant and Chef scripts (not anymore - Vagrant and Chef have been obsoleted by Docker and orchestration), and I remember the pain of that process - wishing that there were a statically compiled alternative, so that I did not have to run the scripts, wait for the VMs to boot, and then get through all the provisioning just to find that there was a syntax error somewhere in my Ruby.

Another experience: I wrote a large application (a performance testing tool) in Ruby, over a period of about six months. I then ported it to Java, and in the process I discovered countless issues that could have become problems down the road - issues that I only discovered because the Java compiler found them - things that would have required a comprehensive unit test suite for the Ruby version to find.

A third experience: Over the last year I have been working mostly in Go, and while I don't like Go _at all_, it has some things going for it: one of them is that it is pretty strongly typed, and I have found that I can do massive refactoring of the Go codebase and introduce _zero_ new errors as a result - try that with a dynamic language!

Comment Re:was it intended to be secure? (Score 1) 97

"Not that I know of. And what would be the point? That would amount to a REST call with metadata attached in PB format, which is kind of like a bicycle for fish." - that would indeed be ridiculous, but I would expect the attached binary content to be unencoded, as it is in an HTTP binary encoded part. There is a major use case for that: queries that send binary data. E.g., I have been using the docker engine and docker registry REST APIs, and many of the methods include both query parameters and binary object parts (i.e., attached files) in the same request or response. I think we need to look at the gRPC specs to see if it handles this case.

"Asynchronous message passing refers to a style of programming similar to (but different from) OOP. It really has nothing to do with UDP other than a superficial similarity." - I disagree. The debate between synchronous calls and messaging is as old as the Internet. You are right that these do indeed distinguish different programming paradigms: in synchronous calls one handles the response inline, whereas an asynchronous approach requires and event-oriented design with compensating transactions at the application level. The asynchronous approach is much, much more complex for the programmer to implement, because of the large number of states that must all be handled, but it is warranted when requests have lots of latency or when there are lots of clients compared to the number of servers - so many that it would be difficult to maintain that many stateful connections. With an elastic back end, however, the latter concern goes away.

Comment Re:was it intended to be secure? (Score 1) 97

"What works for Google (or the DOD, or IBM) doesn't work for most other companies, projects, or programmers, because they operate under a completely different set of constraints." - that is VERY true.

I agree that C++ is too complex. The problem is, the alternatives are even worse for other reasons. Ruby is HORRIBLE from maintainability and performance points of view. To write maintainable Ruby, one has to use TDD, which is deeply incompatible with how many people think. (See the debates between David Heinemeier Hansson and Kent Beck: http://martinfowler.com/articl...) And I have used Ruby a lot, so I am not guessing. And Go is really awful because it throws out enormously useful features like virtual methods and exception handling, and Go also has lots of "gotchas" - my favorite is comparing an interface value with nil and getting false, then type-asserting it, comparing the asserted value with nil, and getting true. Things like that will result in lots of bugs, and there are many gotchas like that. So I have turned back to C++ because, if used well (i.e., conservatively), one can write very efficient and maintainable programs - IF used well. Efficiency matters at Internet scale: if a Ruby program requires 10,000 VMs but a C++ program requires only 100, that is a very large cost saving.

Comment Re:was it intended to be secure? (Score 1) 97

"not designed for large messages...": Hmmm - isn't there a way to attach a file - i.e., a MIME "part"? Since PB uses HTTP2, it would be hard for me to imagine that they left that out. But if you are right, I agree it would be a terrible problem. Perhaps attaching files is part of gRPC but not PB?

Not sure I understand your comment about non-copy memory transfers, since PB/gRPC are remote (out-of-process) communication tools.

Yes, you are right that message passing (e.g., over UDP) is more scalable when one has a single server. But if you can massively scale the servers, the limitations of RPC-like communication go away.

Comment Re:was it intended to be secure? (Score 1) 97

Another thought: I don't want to dismiss what you said above about Internet scale: it is actually quite insightful: "The 'Internet scale' we are talking about here is millions of different clients and servers..." It is true that for query applications, the REST model is logically a good one. However, the REST protocol (i.e., HTTP with character data) is horribly inefficient. If one compresses it, that helps a lot, but that is really a workaround. gRPC uses (by default) Protocol Buffers, which is reportedly on the order of ten times more efficient in terms of bandwidth and latency. Also, for applications in which the user performs updates to data, an RPC model makes more sense. One can use gRPC with a REST-like paradigm: it all depends on how one defines one's responses. It is kind of moot, though, because programmers use toolkits like React and Angular, and a component-oriented approach has become popular today (we are back to a popularity contest) - driven largely by the frameworks that embed a "react" pattern - i.e., the MVC is gone, and each component performs transactions against a back-end microservice. For this pattern, gRPC and microservices in containers is a perfect combination - and Internet-scalable.

Comment Re:was it intended to be secure? (Score 1) 97

Stodgy? Some of the languages that I have used extensively over the years (more or less chronologically): Basic, Fortran, Algol, PL/I, Pascal, Ada, C, Modula-2, C++, VHDL (I helped to develop this language, and wrote compilers for it), Java, Ruby, Go. Other languages that I have used here and there: Lisp, Prolog, Python, AspectJ, Scala, Groovy. Which are the most productive for an organization (not an individual) over the long term? Without a doubt, Java. Reason: It is by far the most maintainable and refactorable as teams turn over. Second choice: C++. Some of the worst for maintainability: all the dynamic languages, and Go. I think that Google knows what it is doing, and the fact that it defies many of today's trendy tools tells me that just maybe those tools are popular because they are popular - not because they are good. As Alan Kay has said, "Computing spread out much, much faster than educating unsophisticated people can happen. In the last 25 years or so, we actually got something like a pop culture..."

Comment Re:was it intended to be secure? (Score 1) 97

It seems to me that REST tries to solve a problem that does not exist. Programmers want RPC. The notion of REST is too abstract for most programmers. Also, Google's internal systems are Internet-scale - they are the ones providing that scale! Of late, Google has turned away from several currently cherished paradigms, including REST and dynamic languages, returning to older concepts that have stood the test of time.

Comment Re:was it intended to be secure? (Score 1) 97

Yes, and before SunRPC, I remember that Apollo Computer had an RPC toolkit. Actually, CORBA did work - quite well. I used it a lot back then. XML-based messaging came along - way before Internet scale was a concern - because it went over HTTP, thus "tunneling" through firewalls. CORBA required you to open ports, and sysadmins would not do that. From there, the nightmare of WSDL emerged, and then REST replaced WSDL, and programmers sighed with relief because it was so much simpler. By that time, the OMG had updated CORBA so that it could go through firewalls, but it was too late: since IT is fad-driven, CORBA was "out of style". Then "Internet scale" started to become an issue. And Google has found that REST cannot meet the needs of Internet scale, so they have developed gRPC, which is much like CORBA, and is (like CORBA) much, much faster and uses much, much less bandwidth than REST. Google's clearly stated reason for developing gRPC is that REST is too slow and uses too much bandwidth.

Comment Re:was it intended to be secure? (Score 1) 97

"The first "web browser" was actually a WYSIWYG editor" Which one was that? Are you referring to Mosaic? If so, I did not know that it had editing capability. As I recall, REST came along as a response to SOAP, which was overly complex for what people were using it for. The most common feature of SOAP was SOAP-RPC, so it was natural for people to want to use REST for that. I 100% agree that HTTP is being misused - and REST as ell. What we need is a protocol other than HTTP for remote procedure calls. Unfortunately, everyone assumes that every new application over the Web must use HTTP. Do you recall IIOP (CORBA)? Sysadmins would not let it through their firewalls, so people turned to HTTP in order to be able to do remote procedure calls over the Internet. People need and want RPC for a myriad of applications. REST and HTTP are terrible for that. IIOP exists, but no one uses it. Now we have gRPC which pushes RPC over HTTP (without REST), but it is still using HTTP. This sidesteps the issue but it might be the best we can hope for.

Comment Re:was it intended to be secure? (Score 1) 97

Well, I must misunderstand REST then! Although every single REST project I have been on has treated REST as an API syntax. But is this splitting hairs? I know that the concept is that one transfers state from one place to another. But in practice the paradigm is driven by the UI framework (e.g., Angular, etc.), and given the component-oriented, React-style patterns of today, one is not transferring state: one is making API calls. That is what apps need, and they are using REST because of AJAX, and it has produced a mess. JSON is a maintainability nightmare, for example, with people specifying data structure syntax by showing examples, instead of using something complete and normative like BNF (or JSON Schema, now that there finally is such a thing).
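Here is the kind of normative specification I mean - a small JSON Schema sketch (the field names are invented for illustration), as opposed to "here is an example payload, guess the rest":

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "SeatReservation",
  "type": "object",
  "required": ["seat", "passenger"],
  "properties": {
    "seat": { "type": "string", "pattern": "^[0-9]{1,2}[A-F]$" },
    "passenger": { "type": "string" }
  },
  "additionalProperties": false
}
```

With a schema, a reader knows exactly which fields are required and what their types are, instead of inferring the structure from samples.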
