About Rex Troumbley

Rex is a PhD candidate in the Department of Political Science and Alternative Futures at the University of Hawai'i at Manoa. His dissertation research deals with the politics of taboo language and censorship. His latest research deals with the ways in which automatic filters and algorithmic language controls shape online discourse.

Automating Slanderous Search

Bill Froberg/Flickr

Modern editions of Mary Shelley’s Frankenstein often drop the full 1818 title of the celebrated novel: Frankenstein; or, The Modern Prometheus. The Prometheus legend has several variations, but Shelley’s story draws most upon Aesop’s version, in which Prometheus makes man from clay and water. Prometheus’ creation, made in violation of the process by which life is naturally created, rebels against him, and Zeus punishes Prometheus for the unintended consequences of his act.

Last week, The Atlantic reported that Google and Bing seem to be autocompleting different stories about Microsoft’s upcoming game console, the Xbox One. On Google, a search for “the Xbox One is” returns autocomplete suggestions for “terrible,” “ugly,” and “a joke.” The same search on Bing, Microsoft’s flagship search engine, returns a single autocomplete suggestion: “amazing.” Commenters on the Atlantic article pointed out that dropping the term “the” from the searches would yield different results and should put to rest any conspiratorial thinking that Google is smearing Microsoft’s product through its autocomplete. A search for “xbox one is” on Google is still pretty negative, suggesting “xbox one issues,” “xbox one is bad,” and “xbox one is garbage.” Bing’s suggestions, however, are even more scathing. With “the” dropped, Bing agrees with Google in suggesting “xbox one issues” and “xbox one is bad,” but it also suggests that the Xbox One is “terrible,” “going to fail,” “ugly,” “watching you,” “crap,” and “doomed.”

Xbox One isn’t the only product to have an interesting mix of autocomplete suggestions. Autocomplete on Google suggests “google glass is stupid” and “google glass is creepy.” Bing suggests these too, but also suggests that google glass is: “ridiculous,” “[a] terrible idea,” “military tech,” “scary,” and “useless.”

Searching for other tech companies returns interesting results, too. Google suggests that “apple is”: “dying,” “evil,” “dead,” and “doomed,” while Bing suggests the company is: “evil,” “losing its cool,” “losing,” “a cult,” “dead,” and “going downhill.” (On the flip side, Bing’s autocomplete also suggests that Apple is a “good company to work for” and “better than android.”) Meanwhile, Bing’s autocomplete doesn’t seem to be advertising Windows 8 very well, suggesting that “windows 8 is”: “terrible,” “horrible,” “awful,” “slow,” “awesome,” “a disaster,” “great,” and “crap.” These are only a few of the latest examples, but FAIL Blog has a great collection of similar autocomplete fails collected by search engine users.

But what if a person’s name is associated via autocomplete with something they might not like? In May 2013, former first lady of Germany Bettina Wulff successfully sued Google for automatically completing searches for her name on the company’s German search engine with terms like “escort,” “prostitute,” and “past life.” For years, rumors had circulated that Wulff had worked as an escort before meeting her husband (and future president) Christian Wulff. Five similar autocomplete cases had been leveled against Google in Germany, most involving associations between a person’s name and terms like “fraud” or “bankruptcy,” and before Wulff, Google had won them all.

Before Wulff, Google’s main defense had been, as explained by the company’s Northern Europe spokesperson Kay Oberbeck, that the predictions are the “algorithmically generated result of objective factors, including the popularity of the entered search terms.” Google argued it was not responsible for simply displaying the aggregated input of its users, but even if German courts had agreed, marketing departments would still have some serious work ahead of them.

Despite Google’s assertions that “autocomplete predictions are algorithmically determined based on a number of factors (including popularity of search terms) without any human intervention” and that “objective factors” alone drive the suggestions, Google voluntarily and expressly intervenes in autocomplete results to remove hate speech, copyright infringement, and other terms on a country-by-country basis (for example, searches in German do not show Holocaust denial keywords, though they do appear in searches within the US). While Google had not lost an autocomplete case in Germany before Wulff, it had lost several defamation cases in Japan, Australia, and France.

And what if some clever person figured out how to use autocomplete to their advantage? In 2010, Internet marketing expert Brent Payne paid several assistants to search for “Brent Payne manipulated this.” Not long after, users typing “Brent P” into Google would see Payne’s results in their autocomplete suggestions. When Payne advertised what he had done, Google removed the suggestion.


Payne’s manipulation of Google’s autocomplete, and Google’s own reaction to it, should indicate that the algorithms built to guide and direct us through the Web are neither infallible nor incorruptible. In several countries, algorithm creators have been held responsible for the actions of their autocompleting creations. At the same time, decisions to intervene in the operation of algorithms can be viewed as censorship or an abuse of power. Shelley borrowed the term “Modern Prometheus” from Immanuel Kant’s description of Benjamin Franklin’s contemporary experiments with electricity. When the creators of algorithms can be held responsible, by legal institutions or consumers, for the defamatory output of their creations, those creators are forced to accept the successes, limitations, and failures of their experiments with electronic discourse.

Cloud Computing, Cloud Polluting?

courtesy of PayBit.pl

In 2008, Satoshi Nakamoto (a pseudonym) announced plans to build a new electronic currency, called Bitcoin, that would be totally peer-to-peer and require no third-party intermediaries. To get new Bitcoins, users would install programs on their computers called “Bitcoin miners” that solve complex mathematical puzzles: the mining program searches for a sequence of data that produces a particular pattern and, when it finds one, earns the miner a small number of Bitcoins. By making the puzzles difficult and solvable only after some heavy computing, coins would be introduced into the system slowly over time and distributed randomly among users. Simply put, users could make Bitcoins by spending their computers’ processing power to solve these puzzles and generate new coins. The rate at which new Bitcoins are created is designed to halve roughly every four years until 2140, by which point the number of Bitcoins will have reached a maximum of 21 million coins and no more will be added into circulation.
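The 21 million cap falls directly out of that halving schedule. A minimal sketch of the arithmetic, assuming Bitcoin’s published parameters (an initial reward of 50 coins per block, halving every 210,000 blocks, which at roughly ten minutes per block works out to about four years):

```python
# Sketch of Bitcoin's issuance schedule (assumed parameters: 50-coin
# initial block reward, halving every 210,000 blocks).
INITIAL_REWARD = 50
BLOCKS_PER_HALVING = 210_000
SATOSHI = 1e-8  # the smallest unit of a Bitcoin

def total_supply():
    """Sum the coins minted in each halving era until the reward rounds to zero."""
    supply = 0.0
    reward = float(INITIAL_REWARD)
    while reward >= SATOSHI:
        supply += reward * BLOCKS_PER_HALVING
        reward /= 2  # the halving
    return supply

print(round(total_supply()))  # prints 21000000 -- the cap the article mentions
```

The geometric series converges just shy of 21 million, which is why no more coins enter circulation after the final halving era.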

This system worked well for the first few years, but since Bitcoin mining became widely practiced in 2009, the easy puzzles have been solved, and more processing power has been needed to solve the increasingly difficult puzzles that remain. Though there are other ways to obtain Bitcoins (buying them with other currencies, trading them for products and services, or collecting processing fees), mining the coins is still the only way to introduce more coins into the system. As Bitcoin mining requires ever more computing power for diminishing returns, the low-powered computers found in homes and offices are no longer up to the task of virtual mining.
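The “increasingly difficult puzzles” are hash puzzles: a miner varies a nonce until the hash of the block data falls below a target, and lowering the target (requiring more leading zero bits) multiplies the expected work. A toy sketch of the idea, with hypothetical data and difficulty values (the real protocol uses double SHA-256 over an 80-byte block header and vastly higher difficulty):

```python
import hashlib

def mine(data: bytes, difficulty_bits: int) -> int:
    """Return a nonce such that SHA-256(data + nonce) has the required leading zero bits."""
    target = 2 ** (256 - difficulty_bits)  # valid hashes are below this value
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Each added bit of difficulty doubles the expected number of attempts,
# which is why mining keeps demanding more processing power.
for bits in (8, 12, 16):
    print(bits, "zero bits -> nonce", mine(b"toy block header", bits))
```

Because hash outputs are effectively random, the only way to meet a stricter target is brute force, so as the network raises difficulty, home machines fall behind.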

courtesy of Zach Copley/Flickr

In April 2013, Mark Gimein at Bloomberg published an article calling Bitcoin mining an “environmental disaster” that consumes 982 megawatt hours a day, or enough power to run 31,000 US homes. Additionally, the value of Bitcoins is subject to massive fluctuations in the currency trading markets, threats by various governments to shut down the experiment, and hacker attacks on the Bitcoin system. Just three days before Gimein published his article, Bitcoin values plummeted by 77% after hackers and new users put pressure on the system. A month later, US authorities seized the world’s largest Bitcoin exchange, and earlier this week the IRS declared Bitcoins taxable income. While Bitcoin has made a few people wealthy, Bitcoin miners are quite literally converting thousands of megawatt hours into virtual currency, the future of which is extremely uncertain. Just like mining for gold in the real world, mining for virtual coins presents serious political, economic, and environmental issues.

courtesy of Jeff Kubina/Flickr

While Bitcoin may be one of the most obvious challenges to the virtual-material divide, it may not be the most significant. In September 2012, the New York Times estimated that digital data centers worldwide use about 30 billion watts of electricity (roughly the output of 30 nuclear power plants), with the US responsible for about one-third of that usage. According to Google, a single search uses about 0.0003 kWh (1,080 joules) of energy, which is roughly the same as turning on a 60W light bulb for 17 seconds. Another estimate found that a 140-character tweet consumes about 90 joules, roughly enough energy to power that same light bulb for 1.4 seconds.
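These comparisons are simple unit conversions: 1 kWh is 3.6 million joules, and a 60 W bulb consumes 60 joules per second. A quick check of the arithmetic (the exact quotients come out to about 18 and 1.5 seconds of bulb time, close to the rounded figures cited above):

```python
JOULES_PER_KWH = 3_600_000  # 1 kWh = 3.6 million joules
BULB_WATTS = 60             # a 60 W bulb uses 60 joules per second

search_joules = 0.0003 * JOULES_PER_KWH  # Google's per-search estimate
tweet_joules = 90                        # the per-tweet estimate

print(round(search_joules))        # 1080 J, as Google states
print(search_joules / BULB_WATTS)  # ~18 s of bulb time per search
print(tweet_joules / BULB_WATTS)   # 1.5 s of bulb time per tweet
```

Tiny per-action figures like these only become meaningful at scale, which is what makes the aggregate data-center numbers above so striking.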

But what about when no one is actively using these services? A McKinsey & Company report estimated that an average data center only uses 6 to 12% of its electricity for computation, while nearly 90% of energy use goes into keeping servers idling in case of a surge in activity that could crash operations. Companies keep their facilities running around the clock at maximum capacity, regardless of demand, because they fear what might happen if their services are interrupted.

Earlier this month, Google hosted a summit at the Googleplex to consider “How green is the Internet?” In his keynote address, energy researcher Jon Koomey estimated that the Internet is probably responsible for about 10% of the world’s total electricity consumption. Koomey, who has been studying the material impact of the Internet since 2000, noted that the numbers are difficult to track, but suggested that companies that have made their names collecting data could do a better job tracking electrical use. Eric Masanet from Northwestern University found this lack of data troubling enough that he launched a publicly available model for assessing the energy effects of cloud computing called CLEER.

Koomey also noted that moving to digital communications and networks has reduced overall electricity use. For example, Koomey argues, businesses and organizations reduce their use of electricity by allowing companies like Google to host their email servers rather than running their own. The subtext of many of the “How green is the Internet?” keynotes was fairly obvious: if you care about the environment, move your data and processing to the cloud. Google made this connection clear when the company posted on its blog about the summit and cited a study (sponsored by Google) from the Lawrence Berkeley National Laboratory that found that migrating all US office workers to the cloud could save up to 87% of IT energy use (or enough to power the city of Los Angeles for a year).

courtesy of Stuart Marsh/Flickr

From an environmental perspective, Google has done its best to make migrating to the cloud attractive. Google is one of the largest investors in renewable energy, has commissioned several wind farms, and uses more efficient cooling towers for its servers than most Internet companies (though currently only 33% of its energy use is renewable). Other companies are investing in clean and green technologies too. Last year Facebook opened a data center in a building designed to make its servers 40% more energy efficient, and this year it opened a data center in Sweden that runs entirely on hydropower. Apple states that its data centers are powered entirely by solar, wind, hydro, and geothermal energy. Microsoft has pledged to become carbon neutral in 2013 and earned its place on the Environmental Protection Agency’s 2013 list of the top 10 renewable energy-using organizations in the US, along with Intel, Starbucks, Wal-Mart, and Lockheed Martin.

courtesy of Kevin Saff/Flickr

Recently, several cloud computing companies like Cloud Hashing have begun offering services that allow users to outsource their Bitcoin mining to cloud servers. Bitcoin mining isn’t the only service being migrated to the cloud. Last year, Adobe announced its decision to begin offering its Creative Suite of products, like the popular Photoshop and Illustrator, exclusively through its cloud service. Adobe reported this week that 700,000 users have signed up for its “Creative Cloud” service, and the company hopes to have 4 million users by 2015. Adobe’s move to the cloud was primarily motivated by its need to combat piracy of its software and to roll out updates more quickly, not necessarily by a desire to decrease end users’ energy consumption. Other data gathering companies have a stronger interest in collecting, storing, and mining user data. Last week, Google caused some controversy and user confusion after completing an update to its mobile Gmail app that made archive (rather than delete) the default setting for mobile users. Google didn’t remove the delete option (it’s still available through menu actions), but the company is clearly nudging users away from deleting emails. Much like mining Bitcoins, mining user data or letting users search archived messages requires sifting through massive amounts of data looking for particular patterns or text.

courtesy of Peter Patau/Flickr

While Adobe’s new cloud services might use less energy than individual computers running the software would require, and companies like Apple and Google are moving to renewable energy, the lack of transparency about energy usage prevents users from knowing the actual costs of cloud computing. Many people go out of their way to turn off the lights when they leave a room or to recycle soda cans, but become angry when a site loads slowly or they can’t instantly find an email archived four years ago in Gmail. The data centers that store and process old emails and tweets already use more than 2% of the US electricity supply (more than the notoriously energy-demanding paper industry). When one considers how much energy is involved in Internet use, “the cloud” rapidly comes down to Earth.

#imweekly: June 24, 2013

The Tunisian Internet Agency building, the center of the former Tunisian regime’s Internet censorship facilities and once a home of the former dictator Ben Ali, is being converted into a hackerspace and open wifi hotspot for nearby citizens. Plans are in the works to extend the range of the building’s routers to share Internet access with more of the population.

A recent report from the Citizen Lab found that Pakistan is using Netsweeper, a filtering technology managed by a Canadian company, to block websites or tamper with DNS. The Pakistani government is planning to block more URLs and SMS text messages in the country. However, five international companies who sell surveillance and filtering software have committed not to help Pakistan, after protests from civil rights groups.

Human rights activists have filed a request to investigate the use of the FinFisher surveillance software by the Mexican government, which they suspect has been used to spy on journalists and activists in the country. A Citizen Lab report detailing FinFisher’s use in 36 countries was the spark that prompted the investigation. Drug-related violence in the country may have allowed the government to launch several surveillance programs without significant resistance from civil society.

United States
On Friday, Facebook announced it had fixed a bug that potentially leaked 600 million users’ email addresses and phone numbers. The bug, which allowed users downloading an archive of their own account to also download other users’ information, had been active for the past year, though Facebook says it has no evidence that it was exploited maliciously. Security researchers have also discovered that Facebook collects data on people without a Facebook account and keeps a shadow profile of every user that includes information not shared directly with Facebook.

#imweekly is a regular round-up of news about Internet content controls and activity around the world. To subscribe via RSS, click here.

Social Network Alternatives

Courtesy of AJ Cann/Flickr

In May 2013, Facebook announced that it had 1.1 billion users, 665 million of whom were active on the site each day. The three major global social networks (Facebook, Google+, and Twitter) have all experienced huge growth in the last few years. According to the GlobalWebIndex, approximately 51% of the global Internet population uses Facebook on a monthly basis, 25% use Google+, and 21% use Twitter. Despite the rapid growth of these social networks, many users have become dissatisfied with their business models, political practices, constantly changing posting policies, and undemocratic forms of governance. Aside from the concerns over PRISM, Facebook has recently drawn attention for blocking pictures of breastfeeding mothers and for its handling of rape joke memes spreading through its network. Activists and political dissidents in particular have found these social media sites stifling and sometimes dangerous, but often have few alternatives for spreading their messages online.

As a result, several interesting social media alternatives have recently been created to address these concerns and protect both privacy and dissent online. While many social network projects have launched over the past few years, few alternatives remain in active development. Below is a curated list of the best current alternatives for people with moderate computer skills who are concerned with privacy, control of their information, and networking outside the control of governments or corporations.

Diaspora: This nonprofit, user-owned social network consists of a group of independently owned “pods” that interoperate to form the network. Since its launch in 2010 by four students at New York University’s Courant Institute of Mathematical Sciences, Diaspora has been one of the most popular alternative social media sites. As of June 2013, Diaspora reports 405,551 registered accounts (which includes users on the main pod and connected people from other pods) and an estimated 2,270,599 users on the most popular pod (estimated because that information is not public) participating in this distributed social network. Diaspora allows pseudonyms, ensures users own their content, and, because the network is distributed across users who install the free software and set up their own web servers, the network cannot easily be disrupted or its users surveilled.

App.net: In July 2012, this platform evolved beyond being a place for developers to showcase new apps and became a full-fledged social network. The design of App.net is fairly similar to Twitter, but with one big difference. Instead of selling user data to advertisers, the site requires users and developers to pay subscription fees for premium accounts ($5 monthly, $36 yearly, or $100 a year for developers). There are no ads on App.net, but more importantly for social activists and people concerned with controlling their data, App.net will only share information with third party vendors the service needs to work (like payment processors for accounts) and law enforcement (if proper legal channels are observed). When a user deletes something from App.net, the company makes sure it’s gone from their servers within two weeks. It’s not a completely private social network, but it’s close.

Tent.io: This Twitter-like (but not Twitter-clone) alternative offers many of the same advantages App.net boasts, but rests on an entirely different method for distributing information. Tent is an open Internet protocol, like email or TCP/IP, that can be used to run a Tent server (via Tent.is) or to connect several social networks together. Tent.is offers users the ability to run their own server, lets them share anything, and is designed to help users migrate from other social networks. Tent can also be run as a Tor hidden service, allowing activists to communicate without being traced, and because Tent is decentralized, it cannot be blocked the way Twitter has been in several countries. Tent also touts itself as a better alternative even to email, since users can change their address and their followers come with them. Tent also argues it fosters innovation, since applications can be developed for Tent without asking permission from the protocol’s owners. The Tent protocol can be used with Tent.is or independently to grow other networks. Tent is a bit more technical than the other alternatives featured here, but its flexibility and expandability mean it’s likely to continue developing.

GlassBoard: Featuring a very simple and comprehensive privacy policy, GlassBoard is probably the easiest to use of the alternative social networks featured here. GlassBoard’s innovation in social networking is to make money by charging a small user fee rather than selling information to advertisers. Perhaps because they know many people have indicated they would not pay for access to a social network, even if it meant more control over their information, GlassBoard does offer a free account with some limitations, alongside premium accounts with more storage and access to APIs. GlassBoard also offers iPhone and Android apps, encrypts all user data on its servers, and won’t sell personal information for targeted ads. GlassBoard does not have privacy settings; instead, everything a user does on the service is private and can only be seen by people they approve. While GlassBoard primarily focuses on providing businesses with a private communications network, anyone willing to accept some storage limitations or pay a bit for a premium account can enjoy a very simple, secure, and mobile social network.

Identi.ca: Identi.ca is another micro-blogging service similar to Twitter, but it offers many features Twitter does not, such as XMPP support and the ability to freely export personal and “friend” data. Identi.ca enjoyed early success when more than 8,000 people registered for the service within the first 24 hours of its public launch in July 2008. For those concerned with controlling their information, Identi.ca publishes all posts under the Creative Commons Attribution 3.0 license by default, but paying customers have the option to choose a different license. In June 2013, Identi.ca began migrating to the pump.io software platform in order to offer more features, and its development is likely to continue. Setting up the free and open source software on a server requires a bit more technical skill than most of the other alternatives presented here, and joining might be delayed until the migration to pump.io finishes, but this open source social network is worth watching.

For those not ready to completely abandon Facebook, Twitter, or Google+, there are still a few options for managing how user data is used. Two good browser add-ons for determining exactly where your data is going are Collusion for Mozilla Firefox and PrivacyScore for Google Chrome. To keep Facebook, Google, and Twitter from tracking you (and to speed up your browser), the Disconnect extension works well with Firefox, Chrome, and Safari. Finally, to opt out of other advertising that tracks users, the Network Advertising Initiative’s website will show who is tracking a user’s browser and how to disable it.