Comment 1% failure rate is horrible? (Score 1) 115
so 200 million documents reviewed with a "less than 1% failure rate" (whatever that means) is up to 2 million times a student will be falsely accused of "cheating"
that being said criminals love US dollars and the shitcoins pegged to them
So do you use `apt`, or do you build your Linux images by hand from source? Do you use virtualization or containers, or do you hand-deliver your servers to the data center and then install a hard drive containing an image with a full stack that you built from sources? Most of us working effectively in the IT / DevOps space are using tools to bring in other dependencies because it enables us to build repeatable, on-demand CI/CD systems. There is such a thing as acceptable risk, and there is a reason large enterprises use security scanning and subscription tools to identify vulnerabilities in the release chain.
To each their own I suppose. I've been programming for 30+ years going all the way back to assembly.
C is great because it runs close to the hardware and it has function pointers. However, C also has buffer over-runs, no built-in memory management, and no closures. It's great for device drivers and 20-year-old embedded systems, but to do anything modern and enterprise in C you'd be piling a heaping load of libraries on top anyway. You're welcome to your opinion of course, but I grew tired of managing C projects a long time ago. The only time I do C anymore is when writing Node bindings.
Since we're on the subject of bad programming languages, I am going to bring up Java. I wrote Java professionally for a large enterprise for ~7 years. Only very recently did Java gain closures, and that was long after I stopped using it. No function pointers; at least it has anonymous classes. But it always felt like doing anything in Java was so much typing. My least favorite language.
C++ had a lot of problems early on, but after the C++11 standard library I'd say that it's pretty good. Closures and smart pointers allow for modern delegate patterns and error-proof memory management.
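To make that concrete, here is a minimal sketch of what I mean (the class and names are mine, not from any particular project): a delegate stored as a std::function, registered with a closure, with a smart pointer handling cleanup.

```cpp
#include <functional>
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// A tiny event source that stores its delegates as std::function (C++11).
class Button {
public:
    using Handler = std::function<void(const std::string&)>;

    void onClick(Handler h) { handlers_.push_back(std::move(h)); }

    void click(const std::string& label) {
        for (const auto& h : handlers_) h(label);
    }

private:
    std::vector<Handler> handlers_;
};

int main() {
    // unique_ptr frees the Button automatically -- no manual delete.
    std::unique_ptr<Button> button(new Button);

    int clicks = 0;
    // A closure capturing local state by reference acts as the delegate.
    button->onClick([&clicks](const std::string& label) {
        ++clicks;
        std::cout << label << " clicked " << clicks << " time(s)\n";
    });

    button->click("OK");
    button->click("OK");
}
```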
My favorite language was, until very recently, LISP. I guess I was drawn to Javascript because it felt like the most imperative and straightforward way to do a lot of the same things you can do in LISP, but with modern C/Java syntax.
That all being said, I for one enjoy the capabilities of modern Javascript especially for writing purely functional programs. I've been using Typescript lately as well. I've rarely worked on small projects that didn't already have a mile high pile of tooling and libraries on top no matter what language I've worked in. I'm used to running my code through preprocessors, compilers, linkers, testing and packaging tools. For me it does not feel strange to have linters, transpilers, testing and webpack.
I will also add that having picked up Javascript and NodeJS has been immensely beneficial for my career. It's a great job market out there for NodeJS pros.
Javascript (and by extension NodeJS and the entire ecosystem) only gained popularity because of its position as the language of the web.
> It has 30k+ packages
There are over a million packages in NPM
> they're barely curated at all for quality and safety
They are not curated at all. It's an open registry that anyone can publish to.
Say what you will about Javascript, NodeJS and NPM-- it is by far the most popular programming language today. The sheer amount of interest in the language has brought forth many proposals and improvements to incorporate the best aspects of other languages. Javascript has changed a lot over the past 5 years, and after a lifetime of dismissing Javascript myself, I must say that after jumping in I've had a blast.
I started working with docker containers about 5 years ago and have followed the trends of docker compose and then kubernetes. As someone who previously spun up either bare metal machines or raw VMs, installed everything needed, and tended to vertically scale machines to match demand-- working with containers, microservices and horizontal scaling has taken a lot of time and effort to learn and be able to "do right".
I would never go back to Chef/Puppet/Vanguard installations at this point. I feel comfortable composing isolated containers, putting everything in its own container, and connecting everything together with a secure private network. As others have said, it can be extra overhead to design systems this way, and yes there is additional overhead to running isolated containers, but the value you gain is the ability to upgrade in isolation, scale out, and deploy locally.
At this point, I won't work with companies or clients that will not dockerize their deployments. I admit I tend to view these outfits as Luddites, typically clinging to their custom "only I understand it" deployment scripts and/or holding misguided beliefs about what docker containers are used for. This is fine because most outfits either already deploy containers or want to.
If you're working in web development, IT or DevOps and are not already familiar with containers, I would recommend you start researching. At this point I consider container-based deployment an essential skill.
Be able to understand and evaluate Big-O notation, and know when to use hash maps and sets. If you don't already know these, you should!
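A small, self-contained illustration of why that matters (the data here is made up): a membership check against a std::vector is O(n) per lookup, while a std::unordered_set does the same check in O(1) on average.

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <unordered_set>
#include <vector>

int main() {
    std::vector<std::string> allowed_list = {"alice", "bob", "carol"};
    std::unordered_set<std::string> allowed_set(allowed_list.begin(),
                                                allowed_list.end());

    const std::string user = "bob";

    // O(n): scans the vector element by element.
    bool found_linear =
        std::find(allowed_list.begin(), allowed_list.end(), user) !=
        allowed_list.end();

    // O(1) on average: hashes the key and jumps straight to its bucket.
    bool found_hashed = allowed_set.count(user) > 0;

    std::cout << found_linear << " " << found_hashed << "\n";  // prints: 1 1
}
```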
I have built and seen succeed and crash-n-burn spectacularly a variety of automated test frameworks for large enterprises. Let's start with the successes:
- High availability / Robust
- Staging environment for automated test developers
- Performance metrics
- Easy to understand test results
The failures were due to:
- Brittle and poorly designed tests which don't run the same in the CI system versus the tester's machine
- Testers committing bugs into the test environment
- No performance metrics
- Hard-to-understand failure results require duplication and deep analysis
As you can tell, the failures are the opposite of the successes. Allow me to further explain.
The most important item is that the tests always work and are always running. This means test machines and back-up test machines. Running the same test on three different machines is even better because you can throw away or temporarily ignore outliers. Outliers need to be addressed eventually, but day-to-day, developers and managers just want to know if the code they committed causes failures. Having the tests be in any way unreliable causes faith in the tests to disappear. The tests must run, and they must run well. Test environments go down or require maintenance, and you want to be able to continue to run tests during these downtimes. Treat the tests like a production system.
Next, a big improvement I've seen is to have automated test developers contribute new work to a separate-but-equal staging environment. Automated test teams run on an Agile/Scrum iteration and only "release" their new tests at specific times. Another thing which reduces faith in test results is tests breaking due to the fault of automated test developers-- which happens all too often.
Automated tests are the ideal platform for generating performance metrics.
Lastly, a big pet peeve of mine is inscrutable test failures. Test failures should obviously describe the set-up, the expected result and the actual result. If test failures are obtuse and require a lot of time to analyze and triage-- that is wasted time that could have been spent fixing the root cause.
Good luck! If past experience is any indicator, you will be spending far more time and money than you ever imagined to create a robust system that developers and managers will have faith in.
Though at the time, I was a long-time user of the Commodore Amiga. Most PCs at the time were extraordinarily difficult to configure and keep running. I remember the multitasking in Windows 95 being really bad-- explorer.exe getting blocked. Other things that stick out to me were the overuse of modal dialogs and that lower-right notification tray filling up with animated, distracting icons.
Don't get me started on Clippy, or DOS, or file system naming conventions. Sure, compared to Windows 3.1 it was bliss-- but other computing platforms were years ahead.
And? If there's no middle man then ultimately someone (in this case it sounds like the buyer) has total control over the transaction. It doesn't matter what UPS says, if they don't want to release the funds they don't have to.
In a dark market like this the ONLY protection you have against fraud is the other party's reputation.
Did you even read the article? It describes how a third party (arbiter) is agreed to by each party. It takes 2 out of 3 signatures to finalize the transaction (minus arbiter fee).
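A rough sketch of the 2-of-3 rule the article describes (this is illustrative logic only, in C++, not the actual Bitcoin script the market uses): buyer, seller and the agreed arbiter can each sign, and any two signatures are enough to release the funds.

```cpp
#include <iostream>

// Hypothetical escrow state: which of the three parties have signed.
struct EscrowSignatures {
    bool buyer   = false;
    bool seller  = false;
    bool arbiter = false;
};

// Funds can be released once any 2 of the 3 parties have signed.
bool canFinalize(const EscrowSignatures& s) {
    int count = (s.buyer ? 1 : 0) + (s.seller ? 1 : 0) + (s.arbiter ? 1 : 0);
    return count >= 2;
}

int main() {
    EscrowSignatures s;
    s.buyer = true;                       // buyer alone cannot finalize
    std::cout << canFinalize(s) << "\n";  // 0
    s.arbiter = true;                     // buyer + arbiter settle a dispute
    std::cout << canFinalize(s) << "\n";  // 1
}
```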
I am mostly interested on when the phone will be released for USA markets and what bands it will support.
Activities are just workspaces. They might offer a little more customization than a workspace-- but that's all they are.
I used XFCE for many years on this same machine-- without complaint. What I liked about KDE is that it ran as well as XFCE and came with everything I needed for Qt development. Also, I don't need to double-load application frameworks: I've been able to use KDE or pure Qt apps for everything I need.
As for drop-shadows, bouncy cursors and busyness-- that's both accepted and can be done tastefully. (**Cough** Unity **Cough**) Like any environment, customization is both available and desirable. IMHO it was easier for me to get KDE where I wanted it than Unity or XFCE.
This is a big win for the Qt ecosystem. Between KDE libraries reworked into portable Qt modules and official iOS and Android support from Digia-- Qt is gaining momentum. It even managed to survive being gobbled up by Nokia and then sold to Digia-- it has been a bumpy ride.
I recently tried out the latest Kubuntu, installed on an old Dell D410 (12-inch, 1.8GHz SC Pentium, 1.5GB RAM) laptop, and have been loving it. It runs well and does everything I need (which in this case is Qt-related application development).
> As I'm getting back into C++ after almost 20 years and trying to start with Qt, I'd love to see some practical examples.
Sure. The new signal/slot syntax covers how to create anonymous lambda handlers for signals. No longer is it required to create methods on our public interface for a slot. This is pretty much The Big News (tm) when it comes to closures in Qt. But in general, I have lately taken to creating methods and interfaces which expect and rely heavily on callbacks-- and the new C++11 lambda expressions are a very terse way to accomplish callback mechanisms in C++. Conceptually, it has always been possible, and the C++11 lambda extensions are effectively code inliners that accomplish the same task as creating a class to wrap state with a callback method-- which previously had to be done by hand or attached to an existing class (further muddling its interface).
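For instance, a minimal sketch (assuming Qt 5 and a C++11 compiler; the widget and variable names are mine) of connecting a signal straight to a lambda instead of declaring a slot on the class interface:

```cpp
#include <QApplication>
#include <QPushButton>
#include <QDebug>

int main(int argc, char* argv[]) {
    QApplication app(argc, argv);

    QPushButton button("Click me");

    int clicks = 0;
    // Qt 5 connect(): pointer-to-member signal, lambda as the handler.
    // No Q_OBJECT slot declaration is needed for this handler.
    QObject::connect(&button, &QPushButton::clicked, [&clicks]() {
        ++clicks;
        qDebug() << "clicked" << clicks << "time(s)";
    });

    button.show();
    return app.exec();
}
```

The lambda captures local state directly, which is exactly the wrapper class you would otherwise have had to write by hand.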