Voicemail-to-text is probably the best evolution of voicemail. Speech-to-text has gotten very good, so there is no reason the system can't perform S2T on a message and then send it as a text. Keep the original recording around in case the transcription is wrong, but discard it once the text has been deleted.
The C standard library provides an API to all your system resources.
The C standard library (libc) provides a very basic API to some of your system resources. You have to pull in a large number of other libraries in order to obtain a feature set similar to what the Java and .NET frameworks provide out of the box.
And in addition to the IO, thread and math limitations that the AC above touches on, there are several other major problems facing the core C libraries: wchar support, qword support, socket support and overflow-safe functions. There has been significant balkanization between the BSD, GNU and Microsoft camps on these topics, making cross-platform development difficult. I've written a lot of wrapper code over the years dealing with these issues.
The nice part about the Java and .NET frameworks is that most of this functionality comes standard and behaves consistently across platforms.
But I do still find the C libraries, Java framework and .NET framework each lacking in various spots.
My hope is that with the Core CLR now open source and cross-platform, some of these gaps will finally be addressed.
Encrypting the content length header and adding an encrypted checksum (or cryptographic hash) of the payload would help detect JS injections, URL rewrites or other forms of malicious modification. Marking your user session cookie as HttpOnly should also help sandbox it from JS hijacking.
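As a concrete illustration of the cookie hardening mentioned above, both protections are just attributes on the Set-Cookie response header (the cookie name and value here are invented):

```
Set-Cookie: session=d41d8cd98f00b204; Secure; HttpOnly
```

Secure keeps the cookie off plaintext connections, and HttpOnly hides it from document.cookie, which is what sandboxes it from JS hijacking.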
What happens when I just block the original response, pretend your session died, and serve up a bogus login page that gives me your credentials?
Introducing a new URL scheme for mixed HTTP (HTTPM) could help prevent that. It would indicate that HTTP header encryption is a requirement and that the client refuses to proceed without it. So when the user hits refresh on their client after an hour, your bogus site would then need a counterfeit certificate in order to survive the PROT ClientSSL <-> PROT ServerSSL challenge.
The best way to deploy such a system would be to use HTTPS for your site's landing page. If the client's browser supports HTTPM, you could step down to it for pages deeper in your site. Otherwise, stick with HTTPS.
In some ways, HTTPM would be analogous to explicit FTPS (FTPES) in the FTP world. FTPES clients know to issue an AUTH TLS command shortly after connecting and refuse to continue if the server answers with an FTP 503 Unsupported response or the TLS handshake fails.
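For reference, a successful explicit-TLS negotiation looks roughly like this on the wire (banner text varies by server; 234 is the accept code defined for AUTH in RFC 4217):

```
220 ftp.example.com FTP server ready
AUTH TLS
234 AUTH TLS successful; proceeding with handshake
    ... TLS negotiation; all further commands are encrypted ...
USER alice
331 Password required
```

An HTTPM client would follow the same pattern: demand the upgrade first, and abort rather than fall back to cleartext.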
Utilizing a client IP address as a means of identification is highly unreliable unless that client is on the same network as you. Proxy servers, cache servers and NAT devices can masquerade multiple devices behind a single IP address. Worse, some organizations load-balance outbound connections across an array of those masquerading devices, so every TCP connection could originate from a different IP address. The same is true when the client itself is multi-homed, such as a mobile device utilizing both cellular and Wi-Fi simultaneously.
And while the payloads of cookies can be hashed to obscure sensitive information that is stored in clear-text, it does not prevent the theft of the cookie itself. I may not know the true value inside of it, but I may not care. I might want it just to tailgate on an authenticated session. To avoid that, you need to encrypt both the cookie payload and its name.
For most sites, I don't really care if my browsing activity is being monitored. If some security service wants to eavesdrop on my visits to catfancy.com, let them. For the sites where I do care about privacy, HTTPS is generally an option.
But keep in mind that HTTPS alone only buys you so much. You're still leaking information about the sites you visit via your DNS queries. Also, you're still being tracked at the end-points by ad networks and other systems that log your moves. If privacy is that important, you should also be using an anonymizing proxy network like Tor.
Encryption has a cost; it isn't free.
Agreed. For most sites, there are only two areas where I care about encryption: 1) login authentication and 2) session tokens (cookies). For #1, briefly switching to SSL/TLS is no big deal.
The problem today is that there is no satisfactory solution for #2. In order to encrypt your cookies in your HTTP header, you have to encrypt everything. As previously mentioned, this can have some adverse side effects. It is also complete overkill. What HTTP needs is a middle option.
Enter explicit HTTPS.
When a client requests a protected URL, it can be given a challenge and negotiation method for TLS not unlike how NTLM authentication over HTTP occurs. It should also negotiate what HTTP headers should be private. When complete, the client then sends encrypted data using a PROT: [session id] [base-64 payload] header. If you wanted to be fancy, you could make the system tolerant of upstream proxies or load-balancers inserting their own cookies.
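Under that scheme, a request carrying its protected headers might look something like this (the session id and base-64 payload below are invented purely for illustration):

```
GET /account/settings HTTP/1.1
Host: www.example.com
PROT: 8f34a1c2 U2Vzc2lvbiBjb29raWVzIGFuZCBvdGhlciBwcml2YXRlIGhlYWRlcnM=
```

Only the negotiated-private headers ride inside the encrypted PROT payload; everything else, including the response body, stays in the clear and remains cacheable.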
Now you have a system where your session tokens cannot be eavesdropped upon, yet the payload of the HTTP request can still be cached.
Perhaps advertisers should finally move away from the current pay-per-click revenue system and towards a profit-sharing model where the referring website receives a commission on any sales or executed transactions.
I've been reading about click fraud for over a decade now. I don't expect it to go away under the current system.
...someone else will develop a list...
Which is why I believe that the whole exercise is futile. Suing Eyeo is not unlike playing Whack-a-Mole. If they are forced to remove their app, others will simply take their place. Given that Ad Block has already forked development lines (see: Adblock Edge), they're already too late.
IANAL, so I'd like a tort guru to enlighten us on exactly how the creation and distribution of a product (AdBlock) that gives consumers an informed choice over another product (advertising bullshit) is an actionable case.
I'm also curious how much Eyeo opened themselves to litigation by offering a for-profit whitelist that overrides the blacklist instead of sticking just with a blacklist-only model.
It sounds like a water utility company suing faucet makers for making a device that restricts flow of billable water, or the electric company suing light switch manufacturers.
Or like how AT&T used to prohibit third party phones on their lines?
The main difference here is regarding the level of exclusive ownership rights the publisher has versus the public good in relaxing those rights. Many governments have rules allowing small quotes and allowing parodies when it comes to published content. But ad skipping is somewhat murky. Over on the TV side, it is assumed that the Betamax timeshift ruling provides some protection (which the SonicBlue DVR lawsuit would have clarified had it continued). But I'm not aware of anything on the published side.
But it is a PlayStation One system (well sort of).
Poor analogy. That would be like saying that the Macintosh Classic is sort of an Atari ST just because they both used Motorola 68000 processors.
As for the minimalistic nature of the Mongoose-V (MIPS R3000 based) processor in the NH spacecraft, it is more than adequate for an embedded processor. My Sony NEX camera uses a Bionz (also MIPS R3000 based) processor for image processing and user interface controls. The clock rate of the Mongoose-V might seem a little low, but remember that the spacecraft is both power and uplink speed limited. Having a faster processor really wouldn't gain much.
I've often wondered. Suppose you had a time machine, went back, took some random person from the year 1900, and brought them to the present day. How would they fare in the modern world? My guess is that there would be a big adjustment period but they would manage. How about a person from 1850? 1800? 1700? At what point would the person be so totally lost in modern society that they wouldn't be able to function at all?
If you want an example, look at how refugees from poor rural areas in third-world countries handle the transition when they arrive in a first-world nation. You often have massive language and cultural barriers. First-hand knowledge and use of technology is going to be limited. They're going to know little to nothing about our laws. If you just drop them into the middle of NYC, they will do very poorly.
If you put them into an orientation program and assign them to a handler who will bring them up to speed, they'll probably do alright. It might take a decade before they're comfortable in their new home, especially if language was a barrier, but it will eventually happen. There are millions of examples all throughout the western world of this happening. People adapt.
Eventually Obama is going to be a civilian again. If he pleases the right people, he (or his immediate family) can make tremendous amounts of money as a lobbyist, consultant, guest speaker, etc...
Just look at the money that Chelsea Clinton earns from her array of jobs at various consulting, investment, educational, media and humanitarian companies and organizations. Her success was handed to her on a diamond platter as political thanks to her parents.
Tech, agriculture, service industries, food services, etc. all benefit from the well-behaved illegals.
You mean that their owners do. We just added millions of mostly uneducated people to the workforce. If you're in a low skill job and you dislike your wages, hours or working conditions, management will gladly and easily find a replacement.
This sucks for anyone who is entering the workforce or who lacks the proper skills or aptitude to crawl out of the bottom. As if unemployment and underemployment for those people wasn't already bad enough.
Obama just set the war on poverty back by about twenty years.
There was a period of time during the GCC->Clang transition where a lot of stuff didn't build, but those days are long gone.
If you stick with the Ports collection, using Clang is fairly safe if you're on 10.1 and you keep your Ports db up to date. The problem is when you stray outside of Ports, or you find one that really needs GCC (or worse, a newer version of GCC).
The last version of GCC shipped in the FreeBSD base system was 4.2.1 (the final GPLv2 release). You can build newer versions from the Ports collection, but then you end up keeping two compilers installed. There is also some hassle over which set of shared libraries (libgcc, libstdc++) gets used.
I had a package that really wanted something newer, so I installed gcc48. It took me a few hours, but I finally got it shoehorned in. Ugh. I'll stick with packages that are happy with Clang.
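For anyone attempting the same shoehorning, one crude way to steer builds at the newer compiler is via /etc/make.conf. This is a sketch, assuming the lang/gcc48 port is installed and its binaries are named gcc48/g++48 as is conventional for versioned ports:

```
# /etc/make.conf -- point builds at the ports GCC instead of base Clang
CC=gcc48
CXX=g++48
```

Note this redirects everything you build, not just the one stubborn package, which is part of why sticking with Clang-friendly packages is the saner path.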
In FreeBSD, network configuration data is stored in the /etc/rc.conf file.
If you want to manually set the IPv4 address of an interface, you could use:
ifconfig_xx0="inet 192.168.1.10 netmask 255.255.255.0"
If you're using DHCP, remove the defaultrouter line and set the ifconfig string to "DHCP".
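So the DHCP case in /etc/rc.conf would look like this (xx0 again being a placeholder for your driver's interface name):

```
ifconfig_xx0="DHCP"
```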
You can also use the command-line tool sysinstall (replaced by bsdconfig in newer releases) to set network options.
Also remember, FreeBSD uses network-driver-specific interface names. So instead of eth0, eth1, eth2, you might have fxp0, em0, and de0. If that's not your thing, you can always create an alias:
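A sketch of such an alias in /etc/rc.conf, assuming an em0 interface (the ifconfig_<interface>_name knob renames the device at boot, after which you configure it under the new name):

```
ifconfig_em0_name="eth0"
ifconfig_eth0="DHCP"
```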