
Showing posts with label Google Code.

Friday, June 6, 2014

Google's End-to-End is Unacceptable

by Dietrich Schmitz



Regular readers will know that I have taken issue with Google since last year on how they manage Gmail and Drive.

For starters, should any governmental agency manage to break through Google's firewall (oops, the NSA did and pitched camp last year), it will have unfettered access to your metadata and direct access to your Gmail and Drive files. (Image right: Google's End-to-End logo)

Why?  Because they are stored in clear text (unencrypted) format.

That's odd.  Google Cloud does just the opposite.  Hmmm.  I wonder why.  (Taps fingers.....)  That's because Google Cloud is for the 'paying customers' who INSIST that their data meet critical mandated security thresholds (FIPS).  So, in the interest of keeping them from leaving altogether, Google assures its Cloud customers that their data is FIPS-compliant and cannot be viewed by third parties.  How nice of them.

When it was determined last year that the fox was in the hen house, many corporations left U.S. domestic cloud ISPs en masse for Western and Eastern European ISPs to avoid the NSA.  This concern is quite understandable on many levels, and still nothing has been done to impede, much less stop, the NSA's continuing global eavesdropping.

Gmail and Drive are considered part of Google's consumer-facing services, which are, at present, offered for free.  Most everyone using Gmail likes getting it for free, but were they to make the effort to read the 'Terms of Service' agreement, they would discover that Google reserves the right to parse any and all metadata and personal clear text data belonging to the respective account holder.

The main thrust of this stipulation is to let Google place intelligent advertisements in the account holder's Gmail gutter margins, reflecting subjects of potential interest to said account holder by virtue of the parsing logic applied to their data stream.  Very nice, yes?  No!

This is fundamentally wrong.  Users may be stuck with the current terms of service for getting their free Gmail and Drive, but do they have any recourse?

Certainly, one option would be to drop using Gmail and Drive entirely in favor of some other solution.

Another is being provided by Google, which has been under great public pressure to do something to protect account holders' right to privacy.

The solution, named End-to-End in an announcement posted on Google's website, isn't even available yet; the code is still being written and tested before it will ever reach production release to the general public.

While that may sound good, a cursory inspection of the Google Code website reveals a few issues which I feel make this solution unacceptable from the start.

1) Google is only offering 'the solution' as a Google Chrome browser extension.  Many use Chrome.  I don't, because it is proprietary.  That means it is not 100% open source, and so it violates one of the cornerstones of FOSS: Transparency.  We cannot and do not know what is or isn't in proprietary code, and because of that, rogue code and abuses can potentially be introduced without the general public's knowledge and/or approval.  That is what Transparency is all about.  So, Google wants you to have 'their' solution on 'their' terms, stipulating the use of 'their' browser, which in and of itself has volumes of code nobody can claim to know or understand.

2) As if #1 weren't bad enough, Google has chosen to 'reinvent the wheel'.  Namely, the long-standing, mature, fully-debugged gpg2 open source implementation of the OpenPGP standard is being rejected out of hand, again because they want to do things 'their' way: creating a duplicate, immature, bug-prone port of an incomplete subset of gpg2 in slow, interpreted JavaScript.  That's right.  JavaScript.  gpg2 is fully compiled C code.

3) Google chooses to adopt a newer Elliptic Curve cryptographic standard over the proven, mature RSA standard.  Recall that NIST is now in a public relations dilemma, having been exposed last year as consorting with the NSA to introduce 'weakened' cryptographic constants into an EC-based standard.  Upon discovery of the problem, NIST insisted it had no part in, or knowledge of, the NSA's intentional weakening, and put the code out for public review with follow-up action to correct any defects identified by public comment.  That leaves a 'cloud' in my mind over any software dependent on EC.  In terms of severity, compared to items 1 and 2, a thorough audit of EC might restore confidence and make item 3 less of an issue in the long term.

But fundamentally, Google's developers, it would appear, are taking shortcuts and making fundamentally flawed decisions: forcing a solution which requires proprietary Chrome (a Transparency violation) and creating their own immature crypto codebase to 'emulate' a subset of gpg2's OpenPGP features.  The EC support will only be compatible with version 2.1 of gpg2.
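For contrast, the mature gpg2 tooling already handles this workflow from any terminal today.  A minimal sketch (the recipient address and filename here are hypothetical):

# Encrypt a file to a recipient's public key using the existing, compiled gpg2 codebase
$ gpg2 --armor --recipient alice@example.com --encrypt message.txt

# The recipient decrypts the resulting message.txt.asc at the other end
$ gpg2 --decrypt message.txt.asc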

I am giving this project a 'thumbs down'.  Unacceptable.  Back to the drawing board Google.

-- Dietrich

Friday, June 21, 2013

Open Source Downloads: An Endangered Species


With news this week that GitHub is banning storage of any file over 100 MB and discouraging files larger than 50 MB, their retreat from offering download services is complete. It's not a surprising trend; dealing with downloads is unrewarding and costly. Not only is there a big risk of bad actors using download services to conceal malware for their badware activities, but anyone offering downloads is also duty-bound to police them at the behest of the music and movie industries or be treated as a target of their paranoid attacks. Policing for both of these -- for malware and for DMCA violations -- is a costly exercise. (Image credit: iconseeker.com)

As a consequence we've seen a steady retreat from offering downloads, even by those claiming to serve the open source community. First GitHub bowed out of offering the service, claiming that it was "confusing" for the clients. More recently Google followed suit, bringing Google Code Download services to an end. They stated that “downloads have become a source of abuse, with a significant increase in incidents recently”. Community reactions to this have been mixed.

GitHub didn't have an alternative plan for its users and clearly has no desire to be a full-service community host. Google suggested using its Drive cloud file storage service to host files, though this is clearly far from ideal as, for a start, no analytics are available for downloads. Small projects are left with a rapidly decreasing number of options. They could pay, of course, for S3, but for a free download solution SourceForge seem to be the only high-profile answer. SourceForge are doing everything in their power to make it easy for users of Google Code and GitHub to transition across to their service, and GitHub have even included a link to SourceForge in their help pages, recommending them as a viable alternative. SourceForge assure us that they have no intention of shutting down their upload/download services at all.

SourceForge providing an alternative is potentially handy for those whose projects would otherwise be held up by this lapse in services and they will no doubt welcome the wave of new users. The issue shouldn’t be coming up at all though. Confusion for and abuse by users may sound like reasonable pretexts, but perhaps the real problem encountered by both the closing services is a somewhat less reasonable one. There’s a growing expectation that they should regulate the downloads, acting the part of police on behalf of copyright holders.

The pressure to behave that way, whether through a desire to preserve a safe harbour status or simply to tread carefully in the eyes of the law, is an unreasonable hack that appears to mend copyright law online but in fact abdicates the responsibility of legislators to properly remake copyright law for the meshed society and over-empowers legacy copyright barons. These changes to downloads are an inconvenience for open source developers, but should serve as a warning to the rest of us that the copyright system is beyond simple patching.

Monday, April 29, 2013

Turbo Charge Yum with Fastest Mirror and AxelGet Plugins

by Dietrich Schmitz

Holy Crap.

I just finished adding AxelGet to Fedora 18.

What does it do?

Well, it speeds up your yum downloads by opening multiple connections in parallel, which in turn speeds up your update experience, dramatically.

Now, I've always done my ISO downloads using Axel, the same download accelerator that the AxelGet plugin calls from yum.  Axel is arguably faster than any BitTorrent download.  Just pass the URL of a given file which you wish to download to axel in a terminal and off it goes on a tear, burning rubber as it starts.
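For example, a minimal invocation looks like this (the URL is a placeholder, and -n sets the number of parallel connections):

# Fetch an ISO over 10 parallel connections
$ axel -n 10 https://example.com/Fedora-18-x86_64-DVD.iso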

Axel is in the Fedora repository, but if you are currently using another Distro, you can right-click the above link and download a tar.gz of it.  You'll need to untar it to a directory, cd to that directory, then ./configure, make and sudo make install to have it available on your machine.
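Spelled out, the build from source goes like this (the version number in the tarball name is illustrative):

# Unpack, build and install Axel from source
$ tar xzf axel-2.4.tar.gz
$ cd axel-2.4
$ ./configure
$ make
$ sudo make install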

So, I am recommending that the Fedora Team put AxelGet into their standard install, as it should be there alongside all the other plugins like Fastest Mirror.

In this article you'll learn how to install both Fastest Mirror and AxelGet plugins into Fedora 18.

Fastest Mirror


To avail yourself of Fastest Mirror, open a terminal window and type:


$sudo yum install yum-plugin-fastestmirror

The point of this plugin is for yum to locate a repo mirror which is fastest for your download, presumably one closest to your geographical location.
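If you want to confirm the plugin is active after installation, it ships a configuration file which, in standard packaging, lands in yum's plugin configuration directory:

# Verify the plugin's configuration (look for enabled=1)
$ cat /etc/yum/pluginconf.d/fastestmirror.conf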

AxelGet


AxelGet is found in the Google Code repository.  Essentially you will need two files: one being a configuration file, axelget.conf, and the other being a Python yum plugin, axelget.py.

We need to download each.  Right-click the links above and save them into your ~/Downloads directory.

From a terminal window type:

$cd ~/Downloads


Then axelget.conf must be copied into the /etc/yum/pluginconf.d/ directory.  From a terminal, type:

$sudo cp axelget.conf /etc/yum/pluginconf.d/

File axelget.py needs to be copied into /usr/lib/yum-plugins/ by typing the following:

$sudo cp axelget.py /usr/lib/yum-plugins/

You can now test the speed of axelget by installing any application.  I chose LibreOffice.  Remember to observe all rules of the road and traffic regulations.
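A sketch of that test (the package name may vary by Fedora release):

# Watch axelget pull the packages down in parallel
$ sudo yum install libreoffice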

Always use yum responsibly.

-- Dietrich

Monday, March 25, 2013

The JSON API: An Example

by Dietrich Schmitz

I'm not one to let things go.  If I can't figure something out, it simmers and brews, sometimes for days, even weeks at length until I get an answer.

That was my experience with JSON.  I'm not a web uber geek by any stretch of the imagination and have spent over two decades doing IT programming, all without coding a line of HTML.

I am quite fine with that.  But I was confronted by what seemed to be a simple exercise in configuring this website: having it return a list of posts 'by Author'.

Well, no.  It turns out that there isn't a lever, toggle, or switch one can pull to make that happen.  Yes, a post does indeed have category tags, but for that to work reliably one must explicitly append a 'tag' to every post, i.e., the name of the Author.  One omission will result in the query missing a post.  That isn't acceptable.

So, I thought there must be a way to get that seemingly basic information from a post. Yes?

Yes!  It turns out that Google has an array of APIs for their product line and, categorically, Blogger has its very own.  It returns data as JavaScript Object Notation, or JSON for short.

This is good news. Yes?

So, I proceeded to pore over the documentation in earnest, hoping that if I stared long enough and tried various API calls I could get it to cooperate.

This went on for over two weeks with my poking at it periodically without success.

Finally, I decided to try a simple test to see what Google's API would return using JSON.  It is pretty basic (the blog ID is shown as a placeholder and the API key is obfuscated, as below):

<html>
  <head>
    <title>Blogger API Example</title>
  </head>
  <body>
    <script>
      // JSONP callback invoked by the Blogger API response
      function handleResponse(response) {
        document.write(response.name + ' ' + response.description);
      }
    </script>
    <script src="https://www.googleapis.com/blogger/v3/blogs/BLOG_ID?callback=handleResponse&key=xxxxxxxxxxxxxxxxxx"></script>
  </body>
</html>

When I saved the html and opened test.html from my local share, it returned 'undefined undefined' to the screen, as if to taunt me once again.  I was taunted.

So, I double-checked my settings in the Google API console, even recreated my API key just to be sure, and then took just the RESTful portion and ran it from a terminal command line to see what was in the response.  It returned 200 OK, but the response included information I had not seen up to this point, a clue to what was keeping my simple JSON query from giving over the goods.
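The request looked roughly like this (I've obfuscated the API key as 'xxxxxxxxxxxxxxxxxx', and the blog ID is a placeholder):

$ curl 'https://www.googleapis.com/blogger/v3/blogs/BLOG_ID?callback=handleResponse&key=xxxxxxxxxxxxxxxxxx'

And here is what came back: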


// API callback
handleResponse({
 "error": {
  "errors": [
   {
    "domain": "usageLimits",
    "reason": "dailyLimitExceededUnreg",
    "message": "Daily Limit for Unauthenticated Use Exceeded. Continued use requires signup.",
    "extendedHelp": "https://code.google.com/apis/console"
   }
  ],
  "code": 403,
  "message": "Daily Limit for Unauthenticated Use Exceeded. Continued use requires signup."
 }
}
);


Ah hah!  Now we are getting somewhere.  So, in spite of my due diligence in obtaining an API key, it turns out that a bare API key comes with a very limited quota, which I had spent in my previous incantations, and the message was clear in saying I should 'sign up'.  So, yesterday, late in the day, I signed up and sent off the form to Google, which replied that it might take as much as several days before my request would receive a review.

Late last night, an email came from a chap at Google who enabled my API key.  I was quite pleased with the quick turn-around and sent him a thank-you email.

With that, I went directly back to firing off my test.html, opening it from my local share.  Much to my pleasant surprise, it dutifully returned the expected data:

(Image: My test.html exercising the Blogger JSON API confirms success)


So, as you can see above, the issue wasn't my JSON; it was the fact that the API key was not 'authorized'.  Once enabled, it worked flawlessly.

This now opens the door to an array of possibilities for adding additional features, widgets, etc.

Initially, I would like to have a link in the post 'About the Author' box which, when clicked, will open a browser tab and display the title of, and link to, each article belonging to the Author; a rough first pass at the query is sketched below.  Should be easy, right?  That is on today's plate.
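One way to pull that list together, as a first approximation, is to query the v3 posts endpoint from a terminal and filter by author with jq (again, BLOG_ID is a placeholder, the key is obfuscated, and the filter assumes the documented items/author/displayName response fields):

# List the title and URL of every post by a given author
$ curl -s 'https://www.googleapis.com/blogger/v3/blogs/BLOG_ID/posts?maxResults=50&key=xxxxxxxxxxxxxxxxxx' |
    jq -r '.items[] | select(.author.displayName == "Dietrich Schmitz") | .title + " - " + .url'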

Okay then, that is just a very simple JSON API example which I hope might help motivate some readers to do the same for their website.  -- Dietrich


