
Sunday, March 31, 2013

Google Chrome Tricked Out

by Dietrich Schmitz

Alright.  I've been a long-time Mozilla and Mozilla Firefox devotee, but the time came and went (months ago) when I decided that the value of using Chrome exceeded anything that Firefox could muster.

Today, I share with you some of the things which will improve your Chrome experience.

Graphics Hardware Acceleration

It has become popular for browsers to support hardware-accelerated 3D graphics--that is, when a capable graphics processor is detected, the browser will send work to the Graphics Processing Unit (GPU) instead of letting your Central Processing Unit (CPU) do all of the heavy lifting.  This greatly speeds up screen writes to the canvas.

Checking your system's support is simple enough: type chrome://flags into the omnibar, find the entry below, and enable it:

Override software rendering list (blacklist)

Now, restart your browser and type into the omnibar: chrome://gpu

It should look like this:


All green is good.  The last item isn't supported yet by the graphics driver on my Netbook, but otherwise we are 'firing on all cylinders': the GPU will now handle graphics primitive calls instead of the CPU.  Good deal.
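
If you want to double-check from the terminal that your graphics driver offers accelerated rendering at all, here is a minimal sketch (it assumes the mesa-utils package, which provides glxinfo, is available on your distro):

    # Install the OpenGL diagnostic tool (Debian/Ubuntu shown; use yum on RPM-based distros)
    sudo apt-get install mesa-utils
    # "direct rendering: Yes" means the driver can hand work to the GPU
    glxinfo | grep "direct rendering"
    # Show which OpenGL renderer/driver is actually in use
    glxinfo | grep "OpenGL renderer"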

Security Sandbox



With Linux, you know that security is being taken seriously.  In fact, as of Linux kernel 3.5, support for seccomp-bpf is baked right into the kernel.  Any application can take advantage of it, and the good news is that Chrome will detect and use this security sandbox feature.  To check whether your Chrome is sandboxed, type chrome://sandbox into the omnibar and you should see this:



This is a good thing.  Be sure it is set on your Linux system.  Microsoft Windows does not have it.
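
If chrome://sandbox reports that the seccomp-bpf sandbox is off, you can check from a terminal whether your running kernel was even built with the feature.  This is only a rough sketch; the location of the kernel config file varies by distro:

    # seccomp-bpf requires kernel 3.5 or newer
    uname -r
    # Look for CONFIG_SECCOMP=y and CONFIG_SECCOMP_FILTER=y in the running kernel's config
    grep SECCOMP /boot/config-$(uname -r)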

Google Chrome Extensions

Visiting the Google Chrome Web Store is a rite of passage for every newcomer to Chrome.  It's exciting, and you'll find a vast array of Apps, Extensions, and Themes a click away from being added to your browser.

Here are a few that I recommend every user consider adding to Chrome.


The above extensions will automatically appear on your Chrome toolbar (to the right of the Omnibar).

In addition, there are some Apps which only appear in your 'New Tab' Chrome menu that I recommend.


Don't get carried away.  Remember, each extension is going to consume some memory.  The above are what I use day to day; they provide good features and real value with minimal overhead.

I hope you enjoy Google Chrome and get the most out of your browsing experience.

-- Dietrich



Debian: A SpeedBump on the Road to Innovation

by Dietrich Schmitz

I've watched the progress of Linux over quite some time and can't help but conclude that development in the Debian community has become dogmatic, stodgy, and held back for no other reason than pure politics and control issues.

There is nothing creative or innovative about that.  The Debian priesthood make their proclamations, anoint new members and continue to exert control of the pace of development with no interest in changing their software release management policy speed knob, currently set to: slow.  Slow as in 'sloth' slow.

As Debian prefers to resist change, change besets them.  You see, change is occurring on Internet time all around them.  That's fast, for those who watch trends in application development like me.  And the ever-present constant that Debian cannot ignore is that change will continue in overdrive while the Debian community sit on their collective hands, satisfied not to do what needs to be done.

Innovating on Internet time doesn't mean one has to lose control, nor does it mean that one will lose stability at the risk of frequent change.  It means that the pulse is being followed and striking while the iron is hot is imperative to reach profitability and bring product to market when and where it's needed--today and now.

So, life goes on, with or without Debian.  They have made their bed and sleep in it.  The need to politic continues unabated and the camp has their wagons drawn into a circle as if to prepare for attack.

It is fractious, unnecessary, and drives a wedge into the process of community sharing.  Sharing of ideas and sharing of resources are divided along political lines.  It is divisive and leads to internal turmoil, all of it avoidable, but the control must continue.  And it does.

Debian leadership will continue to apply their full will with impunity and seek comfort in their ability to exert only control, not innovation, not creativity, not sharing--just pragmatic politics--and continue to recede into irrelevance.  Happy Easter.

-- Dietrich

Saturday, March 30, 2013

YUM vs. APT: Which is Best?

by Dietrich Schmitz


Nothing stirs more debate with fellow Linux enthusiasts than their package manager.

It's a passionately contested issue.  Which is better, YUM or APT?

You'll be surprised at the answers and it's really interesting to see what people think of each.

Yellowdog Updater Modified


Yellowdog Updater Modified, or YUM, is a complete rewrite of YUP, Yellowdog Updater.

Yum has evolved to support the Red Hat Package Manager (RPM) format and is used by Red Hat, Fedora, Fuduntu, CentOS, and other RPM repo-based distros.

Here are some of the noteworthy features of Yum:


  • XML Repository format
  • Automatic Metadata syncing
  • A Plug-in Module system to make Yum extensible
  • GUI Front-Ends, including PackageKit and Yum-Extender
Subtle differences between APT and Yum include that Yum automatically updates a local metadata database, stored in SQLite on your machine.  This happens silently in the background at intervals, so there isn't any need to perform an update before an upgrade, as is the case with apt-get.  With APT, a local cache stores metadata text files, but on each use one must run sudo apt-get update before running sudo apt-get upgrade to receive needed updates.
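
To illustrate the difference in day-to-day use, the typical command sequences look roughly like this (exact options can vary by distro and version):

    # APT: refresh the package index first, then upgrade installed packages
    sudo apt-get update
    sudo apt-get upgrade

    # YUM: metadata is refreshed automatically in the background, so one command suffices
    sudo yum update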

Also, Yum checks all dependencies before starting the download process.  This may seem inefficient at first, but on a large package download it can be a big time saver, as apt-get will sometimes fail on a dependency, requiring a restart with 'sudo apt-get install -f' to continue retrieving the missing dependent packages.

Yum-presto, one of many plugins written in Python, extends yum to add new features, in this case the ability to utilize deltarpms which contain only the diffs between one version of an rpm and its newer version.  This can speed downloading and installation significantly.

Yum-plugin-fastestmirror, as the name implies, is a plugin which will determine the best mirror (closest to your location) for obtaining your package(s).

Also, when one mirror times out, Yum will switch over to another mirror to continue your download.  Apt-get will fail and require a restart of your upgrade.  There are many tutorials that can help with using Yum; here's one from Red Hat.
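
On Fedora-family systems both of those plugins can be pulled in with yum itself.  A minimal sketch follows; the package and config-file names shown are the Fedora ones and may differ on other RPM-based distros:

    # Enable deltarpm downloads and automatic mirror selection
    sudo yum install yum-presto yum-plugin-fastestmirror
    # Each plugin has a small INI-style config file; enabled=1 turns it on
    cat /etc/yum/pluginconf.d/fastestmirror.conf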

In a recent G+ Linux Advocates community discussion, +Norbert Varzariu wrote:

You need to install some plugins for yum to make it really kick ass:

aliases, changelog, fastestmirror, keys, langpacks, list-data, presto, protectbase, rpm-warm-cache, security, verify
to name a few. If you use btrfs, the fs-snapshot plugin is another great one.

fs-snapshot allows you to 'roll back' to a restore point taken before an update was performed.
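
For those running btrfs, here is a minimal sketch of setting that up (again assuming the Fedora package and config-file names):

    sudo yum install yum-plugin-fs-snapshot
    # The plugin is controlled by this file; set enabled=1 to snapshot before each transaction
    cat /etc/yum/pluginconf.d/fs-snapshot.conf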

Also, for users of yum-presto, deltarpms are downloaded first and the full packages are then rebuilt locally.  This is a compute-intensive task; multi-core CPU support was added, so users should notice a significant speed increase in that process.

Advanced Packaging Tool


Advanced Packaging Tool (APT) is the package management tooling used by Debian-based distros, including Ubuntu.  Probably the biggest plus going for APT is not the tool itself, but the vast repository of software applications behind it.  It is often referred to as the 'de facto' package manager standard because the majority of websites offering Linux software will provide packages in Deb format.

Popularity aside, APT has many good things going for it that make a persuasive case for its use.

The term APT doesn't refer to a single program but to a family of apt-* tools used to download updates and install new programs.

Update, Upgrade and Dist-Upgrade

Update is used to synchronize the local package index with its sources.  Upgrade is used to upgrade installed packages to their newest versions, when available.  Dist-Upgrade does everything Upgrade does plus some intelligent upgrading decisions, such as when a kernel update needs to be performed: it will handle higher-priority upgrades that other packages depend on first, before the packages that depend on them.  Dist-Upgrade is often used when performing a system point upgrade, such as when a new distro release comes out (e.g., Ubuntu 12.04 to 12.10), and will perform an in-place upgrade of your system.
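
In practice, the three operations look like this at the command line:

    # Synchronize the local package index with its sources
    sudo apt-get update
    # Install the newest versions of packages already on the system
    sudo apt-get upgrade
    # Upgrade the whole system, adding or removing packages where dependencies require it
    sudo apt-get dist-upgrade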

Several popular APT front-end GUIs are available, including the Ubuntu Software Center, Synaptic Package Manager, aptitude, KPackage, Adept Package Manager, GDebi and PackageKit.

Conclusion

I'm going to get a lot of flak for saying so, but having spent sufficient time with Fuduntu, a rolling release, I feel that YUM is the better of the two package managers.  Truthfully, I have spent many years living with rpm, apt, and yum, and will say that, as popular as Deb package management is, it really has nothing going for it over YUM.

When I first began using Fuduntu, I thought I would miss Debian apt-get, but I didn't.  Not one bit.

I will also point out that Yum, using RPM by default, is Linux Standard Base compliant.

So, how do you feel about it?  Which package manager do you prefer?  Give us your feedback.

-- Dietrich







Standardization as a Road to More Choice

by Guest Writer Michael Glasser


If you are reading this article from a Desktop Linux system take a minute and look at a few programs on your system: maybe your web browser, your primary word processor, your email program, and whatever else you use often. Look at the save and print dialogs, the term used to Quit (or Exit) a program, the terms used for Options or Settings and where such options are located and what hot keys are used. Chances are you will find much inconsistency in the programs you look at. Even within single programs there is often a great deal of inconsistency. 

Is this a problem? Many users say it is not; they believe they “get used to” each program and speak of how it is not hard to figure out the multiple styles found on their systems. Even if they do not realize it, though, inconsistencies do lead to problems – they lead to reduced productivity and efficiency and an increase in user errors. This is backed up by HCI/UI research, is accepted by pretty much every relevant expert, and such ideas have been expressed by the teams that produce KDE, Gnome, Ubuntu, Firefox, and many other open source projects. Having a distro that works as a unified system is important if you care about the work you do.

What I would like to see – what would benefit desktop Linux users – would be a way for distro developers and users to get more choice than they do now. Imagine if Ubuntu had  minimalistic styles for their dialogs designed for great ease of use while PCLOS had more robust dialogs that allowed for renaming and deleting of files from the dialogs. Novice users would be able to trust that “settings” for their system were always in the same place and could look for the same term; more advanced users would know the hot key to get to the same feature. If the users did not like the choices made by the distro developers, they could swap these things out on their own, and their choices would affect their entire system because developers had accepted whatever standards KDE, Gnome, and others had agreed on.

More choice. Greater productivity. Improved efficiency. Reduced Errors. Nothing is lost.

I am not going to pretend this would be easy or perfect. Nor would I want this forced on distro managers or users or developers (and there are good reasons in each of those groups why they might sometimes wish to go outside of the standards). It should be a choice. 

Having such choice is something I have been seeking and predicting for many years. Looking at Kubuntu or PCLOS (or many other distros) of even three or four years ago compared with what they offer today, we see that they act more and more like complete systems and not like a bunch of unrelated software jumbled together... what a customer of mine said felt like a system held together with duct tape that he was afraid would fall apart at any moment (even though he never had a system crash and had no more application crashes than on his old OS, maybe even fewer).

The open source community is getting better and better at allowing users to have distros act like unified systems. There is, however, much room for improvement. I am not a programmer. I would love to hear ideas on how to help move the open source ecosystem to better allow for this increase in choice.

-- Michael Glasser



Friday, March 29, 2013

On Deaf Ears

by Dietrich Schmitz

Today the Electronic Frontier Foundation had a post entitled Texas Court Confirms You Can’t Patent Math | Electronic Frontier Foundation.  It is rather incredible that it takes a federal district court to reach a judgment such as this.  The story's first paragraph opens with:

In a victory for open source and common sense, a federal judge has thrown out a patent suit against the Linux-based operating system on the grounds that the patent claims a mathematical algorithm. The case is encouraging both for the result and because the judge ruled at the beginning of the case on a motion to dismiss. This means that the defendant didn't have to waste a fortune fighting this bad patent. We hope the case will be a model for future litigation involving abstract software patents.

A victory indeed, but it's really not groundbreaking.  Surely, it is abusive and shows just how 'aggressive' patent trolls are and to what lengths they will go to litigate baseless software patent claims.  Luckily for the defendant, the judge ruled early on, before the case got under way, saving a substantial sum in litigation fees that would have gone to fighting a frivolous lawsuit.

Is this the end of these kinds of lawsuits?  I am afraid not.  In fact, the federal decisions which originally established software as patentable date to the early nineties.  The change was fought hard, and one such attempt to petition the U.S. Patent and Trademark Office was made by none other than Donald Knuth.  From his Wikipedia biography:


Donald Ervin Knuth (pron.: /kəˈnuːθ/[1] kə-nooth; born January 10, 1938) is a computer scientist and Professor Emeritus at Stanford University.[2]
He is the author of the seminal multi-volume work The Art of Computer Programming.[3] Knuth has been called the "father" of the analysis of algorithms. He contributed to the development of the rigorous analysis of the computational complexity of algorithms and systematized formal mathematical techniques for it. In the process he also popularized the asymptotic notation.
In addition to fundamental contributions in several branches of theoretical computer science, Knuth is the creator of the TeX computer typesetting system, the related METAFONT font definition language and rendering system, and the Computer Modern family of typefaces.
As a writer and scholar,[4] Knuth created the WEB and CWEB computer programming systems designed to encourage and facilitate literate programming, and designed the MIX/MMIX instruction set architectures.
As an important member of the academic and scientific community, Professor Donald Knuth is strongly opposed to the policy of granting software patents.[5] He has expressed his disagreement directly to the patent offices of the United States and Europe.
A Groklaw.net story written by Pamela Jones in 2009 reproduces Knuth's original letter to the U.S. Patent and Trademark Office, dated February 23, 1994 (shown below).  The letter makes a persuasive argument for why software should not be patentable.  Pamela Jones writes:

If only they had listened to him then! And what a mess the US patent system has become, because they did not listen. Hopefully, Europe will not make the same mistake. You can find the other amicus briefs and letters submitted to the EPO here, and I'll be publishing several of them here on Groklaw in time, to show more reasons why software patents are viewed as so harmful by programmers, those most directly impacted by whatever decision the EPO's Enlarged Board of Appeal reaches.

Donald Knuth puts forward common sense logic that if computer software is built upon algorithms then it must be mathematically based and therefore cannot be patented.  A passage from his letter:

I am told that the courts are trying to make a distinction between mathematical algorithms and nonmathematical algorithms. To a computer scientist, this makes no sense, because every algorithm is as mathematical as anything could be.  An algorithm is an abstract concept unrelated to physical laws of the Universe.

His reasoned protestations continue for two pages.  So, when I say I am incredulous at today's story, it's only because this was addressed long ago, and totally ignored, and the argument came from probably the brightest and most respected computer scientist and mathematician in the world.

On Deaf Ears.

-- Dietrich




Knuth Letter Page 1  


Knuth Letter Page 2  



The Linux Desktop User Experience

by Dietrich Schmitz

The other day I wrote The Linux Desktop Mess ~ Linux Advocates and had a few interesting comments.

Some of the comments were Pavlovian and entirely missed the point of my story, as if I am disregarding open source choice by suggesting standardization.

Quite the opposite.  I cherish choice as much as anyone else.

The point of the story, missed by some, is that programmers follow standards and protocols all the time, some are elective and some are mandatory.

For example, take TCP/IP.  On a network, not much would happen without it.
Nor, for that matter, would anything happen if there weren't DNS, SSL, SSH, ARP, BSD sockets, SIP, GnuPG, Standard C or POSIX, to name just a few protocols and standards.

I submit that there is a difference between a desire to innovate to achieve a realized improvement and the desire to take 'shortcuts' so as to 'avoid' obstacles or perceived unnecessary work.

That sentence has the word 'perceived' in it.  From a programming perspective the work may be real and required or perceived as being unnecessary.  If by chance the programmer elects to 'avoid' a volume of coding (required or unnecessary) to reduce his/her workload, that introduces the possibility for variation and not necessarily for 'the better'.

I've coined the term Linux Desktop User Experience (UX) to cover the all-encompassing experience a user has with a newly purchased Linux system: unboxing, setting up, signing in, and the ongoing core UX issues that follow.

One commenter, Mark Wiygul, wrote on the 'Mess' post:

The big distributions should get together with bi-annual ad-hoc committees to form a common, agreed upon core user experience of the most basic desktop user functions.. and then when the distributions implement one of those functions, do it in a standardized way, AND with a common user trademarked "linux-desktop" logo besides it, letting the user know that when they see function implemented on any "linux-desktop" that its implemented the same way. For instance, define a common "linux-desktop" file explorer in the bi-annual committee. IF the linux-desktop uses the "linux-desktop file explorer" then it has the common logo that users know.. "ahhhaaa!! I know how to use this explorer. If the committee defines BASH as the common terminal, put a trademarked logo besides it letting the user know.. "ahhaaa!! I know how to use this terminal window". Every two years define what the linux-desktop file explorer, web brower, notepad, spreadsheet, word processor, paint program, start menu, search button, etc look and act like. If the distribution is a "linux-desktop" then the user knows that she can look for the common logos and get common experiences. If the linux-desktop doesn't implement the defined standard, they don't use the defined logo for say, there unique word processor. If the distribution doesn't want to be a "linux-desktop" then fine, it can do whatever it likes how it likes it. This increases consistency between COMPETING distributions without destroying the competition between Distributions that linux users love.

Is it far-fetched to think that arriving at a set of common core user experience criteria would be a good thing?

And, if that set of common core criteria provides a net benefit to OEMs in terms of certification and reducing support issues and improves the overall user experience, how is that a bad thing?  Isn't this already happening with Google's Chromebook?  The OEMs love it because it is certifiable, reliable, hard to break, and safe, all minimizing potential returns and ongoing costly support issues.

I am in agreement with his comments and do feel that a singular FHS and package management system would greatly reduce the burden of implementation and certification of application software for Linux.

And, I would add, it would not turn Linux on its head, nor would it impinge on choice; having distros labelled LUX-certified would be in developers' best interest to comply with.

Computers purchased with a LUX-certified badge of common criteria (much like a Windows certification medallion) would enable prospective buyers to safely assume that, even if the machine is using Distro X (vs. A, B, C, ...), they will be to some extent familiar with the operation of said equipment.

It seems to me that this is a 'good thing'.  Your thoughts?

-- Dietrich


Software Patents: Incompatible With and Antithetical to GNU/Linux


by Dr. Roy Schestowitz

When patent monopoly extends beyond physical devices to the realm of costless copies of copies

Over the years I have composed several thousand posts about software patents.  A lot of GNU and Linux advocates lose sight of what has clearly become the #1 impediment to adoption of platforms such as Android and webOS.  Patents are harming these Linux-powered platforms in ways that are scarcely understood by the outside world, because a lot of the bickering over patents happens behind the scenes.

Notable among the secret deals was the 2006 Microsoft/Novell deal and, prior to that, a deal with Sun Microsystems.  The goal is to impose barriers on the distribution of Free (as in freedom) software.  Merriam-Webster defines antithetical as "being in direct and unequivocal opposition," which is exactly what software patents are to copyleft-based software -- software on which the only restriction is that sharing requires modified, distributed copies to be made available under the same licence, hence ensuring the preservation or endurance of a program's freedom.

When copyright (or copyleft) are further encumbered by abstract notions of ownership/monopoly such as patents, the same principles no longer apply. One can, for instance, distribute copyrighted code without restriction on the number of copies made, but when a per-unit patent licence is introduced, distribution of a program is impeded.

No Open Source licence provides a one-size-fits-all solution to this cleverly-crafted riddle or discriminatory-by-design maze. Without delving into the reason software patents -- like several other classes of patents (e.g. genetics, business methods) -- are outrageous, impractical to enforce, and economically unsound, let us recognise that trying to pretend Free/Open Source software (FOSS) is compatible with patents (as proprietary software lobbyists like to do) is worse than deceitful; it is malicious.

Software patents are basically what an insipid mind would conceive as an evil plot to kill FOSS at a litigious level. The strategy which more recently embedded software patents and FOSS-hostile stings in policies (law) is RAND or FRAND -- basically the vain contention that it is "fair" and "reasonable" to tax FOSS (i.e. software) based on patents (i.e. software patents), even in places like China and in continents like Europe where software patents "as such" (open to interpretation due to ambiguity) are not legal. More GNU/Linux advocates should pay attention to the debate about software patents. It's not just a question of cost but a question of market viability. It determines if proprietary legacy becomes the financial leech on FOSS.

Professor Donald Knuth, a notable computer scientist and perhaps the leading algorithms guru, once wrote: “I find a considerable anxiety throughout the community of practising computer scientists that decisions by the patent courts and the Patent and Trademark Office are making life much more difficult for programmers.”

The issue of software patents is not just a problem to FOSS developers but to all developers. This is why proprietary backers of Linux too should join the debate and rid the world of software patents. This should include companies like IBM, which -- contrary to common belief -- is strongly in favour of software patenting, still.

  - Dr. Roy Schestowitz

Thursday, March 28, 2013

Linux Advocates Traffic Tops 15,000 Pageviews

by Dietrich Schmitz

Riding on a huge wave of referral traffic from LXer.com, which posted the Fuduntu: Back to Fundamentals, Gets it Right ~ Linux Advocates story, combined with the good-news story Linux Foundation Becomes Sponsor ~ Linux Advocates, traffic rose today to a new record high of over 15,000 pageviews for the day.

It has been +Katherine Noyes' and my goal to build good content here that readers value and to see the crest come through is quite encouraging as we continue our journey in constructing and extending this website further.

We hope readers derive value from what they find here and we will redouble our efforts to ensure the standard of quality writing receives continuous improvement.

On behalf of Katherine and myself, thank you for your continued support.

-- Dietrich





Why Advocacy of Linux Must Not Tolerate Censorship

by Dr. Roy Schestowitz

When the GNU/Linux system was created 30 years ago, it was motivated by the belief that -- as much as we may wish to control others -- in order to guarantee everyone's individual freedom we must decentralise and mitigate or neutralise remote control.  Isolation between users and developers was annulled: every user was capable of doing what a developer could.  This was a clever 'hacking' of unwanted relationships between users and developers; exclusivity in access to code was removed, in the licence sense.

Linux thrived owing to the adoption of licensing requirements that assure each and every contributor will retain full control over the entire system, dependants included. It sure is a motivator for many who work for FOSS-centric companies. It's a recruitment tool, too.

If code is law, as Professor Lessig put it, then using code we can control behaviour too. If we are to honour the same principles that motivated the GNU/Linux system, then we must reject the notion of censorship, no matter the platform. Not every commit -- so to speak -- needs to be accepted upstream, but its existence should be allowed and its integrity honoured. Free software is about a diversity of practices, not about imposition from above or the direct and at times explicit coercion of one over another.


Over the years I have come across thin-skinned people who excuse their practice of censorship by calling those whose opinions they disagree with "trolls", or by using equally insulting terms like "shills".  This labelling is used to suppress comments or writers -- an issue I am familiar with as a former writer for some online news sites.  My experience there taught me the role played by editors to whom controversial but otherwise truthful statements are too 'hot' to publish.  This is how ideas get silently killed, or spiked.  It manufactures the habit of self-censorship -- an unnecessary restraint which limits one's scope of thinking.

Speaking for myself, I never deleted comments or suppressed replies, even though many of them (thousands among tens of thousands) included insults and sometimes libel. We must learn to tolerate opposing views and even disruption. That is what freedom is about.

In order to stay true to the standards of Linux and GNU as successful, leading projects that respect and harbour all voices, we must hold to the same principles that made Free software thrive.  Failing to do so would lead us down the path of many failed projects which -- unlike Linux -- no longer attract volunteer contributors (who at times, in due course, found a way to get paid for their work as well).

Advocacy which hinges on amplifying oneself while silencing the rest is not advocacy; it is marketing.  And marketing is almost antithetical to what science-driven programming strives to achieve.  In science, bad ideas die based on their merit, or lack thereof.  The messengers earn or lose credibility based on their words.  Let bad commentary die based on readers' assessments.  Don't suppress it at an editorial level, as that would project weakness, not strength.



- Dr. Roy Schestowitz