Sunday, May 12, 2019

The Death of Moore's Law Is Changing The World

Moore's Law is dead. The effects of this are far more significant than is commonly acknowledged, and they have global implications that will fundamentally change our society. The rise of China as a superpower, the end of America's global dominance, rising inequality, and growing populist resentment can all be traced back to Moore's Law. Looking even further ahead, the death of Moore's Law may mean we are entering an extended period of extremely slow human development, a neo-dark age. Whether this period lasts 10 years, 100 years, or 1,000 years is anyone's guess.

This dire prediction is a large leap, one that many are hesitant to make. But it follows from a careful understanding of the ramifications of Moore's law and how it has been baked into almost every facet of society.

What is Moore's Law
Moore's Law is not a law at all. It was an observation made in 1965 by Gordon Moore, co-founder of Intel, that the number of transistors on a computer chip doubles at a regular interval, roughly every two years, often quoted as every 18 months. The number of transistors on a computer chip is loosely correlated with the speed of the chip: a chip with double the number of transistors can often do roughly twice as much as the previous chip.

Thus, Moore's Law implied that computer performance would grow exponentially: roughly every 18 months, computer chips would double in performance. Moore did not provide much of a reason why this was the case; he simply noticed an existing trend.

What made this Law famous and well known is that, for 50 years, it turned out to be true. For decades after this "Law" (more precisely, this prediction) was made, computer chip performance did indeed improve exponentially.


For the last 50 years, this prediction held, but it is now breaking down. Many suspected that a doubling every 18 months could not continue indefinitely.
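To get a feel for how quickly an 18-month doubling compounds, here is a short sketch. The starting point, the Intel 4004's roughly 2,300 transistors in 1971, is a historical figure; the projection itself is just the doubling arithmetic, not a claim about any actual chip:

```python
def transistors(start_count, years, doubling_period_years=1.5):
    """Project a transistor count forward under Moore's-Law doubling."""
    doublings = years / doubling_period_years
    return start_count * 2 ** doublings

# Intel 4004 (1971): roughly 2,300 transistors.
# Twenty 18-month doublings later (30 years), the same arithmetic
# predicts a chip with billions of transistors.
projected = transistors(2300, 30)   # ~2.4e9
```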

What is Exponential Growth
Exponential growth is when something increases in size by a percentage of its current size. Such curves grow exceedingly quickly, and while they can persist over a certain period of time, they are inherently unsustainable. One of the better-known illustrations of the extreme growth of exponential curves is the thousand-year-old wheat and chessboard problem, retold here with rice.

Many centuries ago, a chess player challenged a king at a game of chess. If the challenger won, all he would ask for was a single grain of rice on the day that he won, with the amount doubling every day thereafter, for as many days as there are squares on a chessboard. The first day he would get a single grain, the second day 2, the third day 4, the fourth day 8, and so on. The King, rich from controlling large fields of rice, laughed at the challenger, believing that he could easily pay such a wager.

As the story goes, the challenger won, and the King was obliged to pay. For the first handful of days, paying the challenger was a triviality. However, the King had foolishly underestimated the power of exponential growth, a doubling over a fixed period of time. After just a few weeks, those handfuls of grains would turn into thousands of tons of rice. After the full 64 days, it would amount to many billions of tons of rice, bankrupting the King.

The moral of this well-known story is twofold. First, exponential growth is exceedingly fast, faster than most people can intuitively comprehend. Second, it is unsustainable. Had the wager continued for 100 days instead of just 64, the King would have needed to give the challenger enough rice to outweigh the entire planet Earth.
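A few lines of arithmetic make the moral concrete. The 25 mg per grain of rice used here is an assumed rough average; everything else follows from the doubling:

```python
GRAIN_MASS_KG = 25e-6  # assumed average mass of one grain of rice (~25 mg)

def grains_on_day(day):
    """Grains owed on a given day: 1, 2, 4, 8, ... = 2**(day - 1)."""
    return 2 ** (day - 1)

def total_grains(days):
    """Total grains owed after `days` of doubling: 2**days - 1."""
    return 2 ** days - 1

total_64 = total_grains(64)                    # ~1.8e19 grains
tonnes_64 = total_64 * GRAIN_MASS_KG / 1000    # hundreds of billions of tonnes

# By day 100, the single day's payment alone outweighs the Earth
# (Earth's mass is about 5.97e24 kg).
kg_day_100 = grains_on_day(100) * GRAIN_MASS_KG   # ~1.6e25 kg
```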

Exponential Growth in Computer Chips is Not Just about Computer Chips
Anyone who lived through the years 1960 to roughly 2010 has experienced exponential growth. For many decades, society marveled at the speed of innovation. Though everyone would grumble about having to replace their computer every two years because the previous one had become obsolete, these complaints carried an element of pride that technology was improving so rapidly.

However, the exponential growth in computing power through the latter half of the 20th century was not just about appeasing computer enthusiasts marveling at the wonders of technology; it had significant real-world impact.

Computers have fundamentally changed how companies do business. An obvious example is spreadsheet software, which fundamentally changed accounting. But computers affected every industry in existence. Airlines became far more efficient by finding ways to pack more people onto a plane. Customer support operators handled far more requests by routing questions more efficiently. Farmers could better predict crop outcomes using more advanced, computation-intensive weather models. Brands could build global reach almost instantly, in what would previously have been a major labor-intensive effort.

In short, computer chips affected almost every facet of commercial business. A single person could do much more with a computer in front of them than on their own; they could be far more efficient.

Why Efficiency Matters
A common measure of the power and wealth of a country is its Gross Domestic Product (GDP). One way of calculating a country's GDP is to multiply the number of people in the country by the average amount each person produces, often approximated by their take-home pay.

Thus, a country's GDP can be increased either by growing the population or by getting each person to produce more on average. Short-term variations, on the order of a decade, can occur due to credit cycles, but averaged over decades, productivity and population largely determine GDP.

The growth of the U.S. economy in the latter half of the 20th century is largely attributed to these two factors: the population increased, and the output per person also increased.
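The decomposition above can be sketched with illustrative numbers (the country and its figures here are entirely hypothetical):

```python
def gdp(population, output_per_person):
    """GDP approximated as population times average output per person."""
    return population * output_per_person

# Hypothetical country: 100 million people producing $50,000 each.
base = gdp(100_000_000, 50_000)

# Grow the population 1% and per-person productivity 2%:
grown = gdp(100_000_000 * 1.01, 50_000 * 1.02)
growth = grown / base - 1   # ~3.02% GDP growth, the two rates compounded
```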

There aren't too many ways to make an individual produce more. They can work longer hours, but more commonly, they can use tools that allow them to do more in the same amount of time. A mechanic with a power drill can get things done much faster than one with a screwdriver. An accountant with a spreadsheet can work much more quickly than one using pencil and paper.

What has Historically Driven Increased Efficiency
Two developments have dramatically improved worker productivity over the last 150 years: the industrial revolution and the information revolution. Trains allowed the same number of humans to move far more goods than they could on horseback. Instant global communications allowed businesses to react far more quickly than relying on postal mail.

Both the industrial revolution, which occurred roughly from 1850 to 1950, and the information revolution, which occurred roughly from 1960 to 2010, are called revolutions precisely because change occurred at an exponential rate.

Coming Up:

  • Why exponential improvements allow everyone to improve;
  • Why exponential improvements to computer chips were the bedrock of increased efficiencies over the last 50 years;
  • Why these improvements are hitting a wall;
  • How the market has responded to this wall;
  • How capital and labor are affected by the end of exponential productivity growth;
  • How developing countries are affected by the end of exponential growth, and what it means for the U.S.;
  • What all this means for long-term human development.


The Death of Moore's Law is Changing the World - Short Version

Moore's Law has led to exponential computational growth over the past 50 years, but it has been dead since roughly 2010. The consequences are incredibly important, and are listed below in reverse chronological order (future events come first, past events come last):

  1. Increased global income inequality;
  2. The rise of China's influence at the expense of the U.S.'s leadership hegemony;
  3. Increased tension between workers and capitalists;
  4. Worker productivity becoming flat;
  5. Stagnant technological innovation in computers and healthcare;
  6. More reliance on branding, advertising, and influence as a marker of value;
  7. Greater reliance on custom computer chips (ASICs) rather than general purpose processors;
  8. The transition to cloud computing.
These issues, some of which have already happened and some of which will happen, are a direct result of the death of Moore's Law. I don't have space to set out a full explanation right now, but will try to follow up.

Friday, July 14, 2017

Moore's Law is Dead

My current laptop is three years old, a then top-of-the-line Dell XPS 15. After three years, I felt I could use an upgrade, so naturally I went to dell.com to check out the latest and greatest XPS 15. The newer edition sports a nicer screen and the RAM is upgradable to 32GB instead of 16, but the other specs are pretty similar.

Perhaps the most disappointing aspect of the newer XPS 15 laptop is the CPU. The new XPS 15 uses an Intel Core i7-7700HQ, whereas my three-year-old XPS 15 uses an i7-4702HQ. According to cpubenchmark.com, the newer chip gets a speed score of 8987 whereas the older chip gets a speed score of 7523. That's less than a 20% improvement in speed after three years of R&D. Even worse, the newer chip draws 45W of power whereas the older one drew 37W, which is more than a 20% increase in power usage and battery depletion. Admittedly, it's unclear whether cpubenchmark.com's score scales linearly with real-world performance, but that is ostensibly the purpose of the benchmark.
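The percentages above are simple ratios of the published figures:

```python
old_score, new_score = 7523, 8987   # cpubenchmark scores quoted above
old_power, new_power = 37, 45       # TDP in watts

speed_gain = (new_score / old_score - 1) * 100   # roughly 19.5% faster
power_cost = (new_power / old_power - 1) * 100   # roughly 21.6% more power
```

So the chip's power draw grew slightly faster than its benchmark score over those three years.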

I'm pretty saddened that, after three years of developing the latest and greatest chips, it seems Intel has only increased speed to the detriment of power usage.

Wednesday, October 24, 2012

Django + Google App Engine + MapReduce

If you're using Django-nonrel on Google App Engine, MapReduce will not work out of the box. I put in a bit of work getting it running. Fortunately, I was not the first. This blog post suggests some code to get you started and allows you to run a mapper over all of your entities. Unfortunately, it only lets you map App Engine entities, not Django entities. The code below fixes that issue. It works in a similar way, but performs a Django "get" before running the mapper to convert a key into a Django entity. This adds a bit more overhead: one more get per map call.

# Imports assumed from the App Engine MapReduce library and Django.
from google.appengine.datastore import datastore_query
from mapreduce import util
from mapreduce.input_readers import AbstractDatastoreInputReader
from django.db.models.sql import Query


class DjangoEntityInputReader(AbstractDatastoreInputReader):
    '''
    An input reader that takes a Django model ('app.models.Model')
    and yields entities for that model.
    '''

    def _iter_key_range(self, k_range):
        # Resolve the Django model's underlying datastore table name.
        query = Query(util.for_name(self._entity_kind)
                      ).get_compiler(using="default").build_query()
        raw_entity_kind = query.db_table
        # Run a keys-only datastore query over this shard's key range.
        query = k_range.make_ascending_datastore_query(
            raw_entity_kind, keys_only=True)
        for key in query.Run(config=datastore_query.QueryOptions(
                batch_size=self._batch_size)):
            # Perform a Django "get" to turn each raw key into a model
            # instance (one extra get per mapped entity). util.for_name
            # resolves the dotted model path, avoiding a fragile eval().
            yield key, util.for_name(self._entity_kind).objects.get(
                pk=key.id())

    @classmethod
    def _get_raw_entity_kind(cls, entity_kind):
        '''
        A bit of a hack: returns a table name based on the entity kind.
        '''
        return entity_kind.replace(".models.", "_").lower()


To use the code above, place the class in your views.py and use the following in your mapreduce.yaml:

- name: My mapper
  mapper:
    input_reader: myapp.views.DjangoEntityInputReader
    handler: myapp.my_mapper
    params:
    - name: entity_kind
      default: myapp.models.MyModel

That's all you need to get MapReduce up and running, but there is an additional problem. MapReduce uses a property called "__scatter__" to shuffle the entities and assign them to the proper MapReduce shard. However, Django entities do not have the __scatter__ property, so all of the entities get assigned to a single MapReduce shard, and you do not get to enjoy the massive parallelism of MapReduce. To fix this, you'll need some code of mine, which I posted here. Feel free to contact me if you have any questions.

Sunday, September 30, 2012

PACER API with REST Interface Released

I had previously written a short blog entry on my open source PACER API. The open source project is ongoing, but I have recently devoted my efforts to Docket Alarm and its online PACER REST API, which is now substantially complete.

Docket Alarm's API allows users to search for docket information from Federal courts and pull the information using a simple REST interface.  The API has a wide variety of potential applications, especially for due diligence.  For example, an application that assists in originating loans could use the API to automatically look up a potential borrower's bankruptcy or litigation history.

The API can search by name, geographic location, date range and a number of other fields.  Additional fields can be added by request.  Once a search is complete, the API can access the case's docket text and associated meta-data. The meta-data contains fields like the judge's name, all of the party names, and the lawyers associated with each party. Finally, the API allows you to pull individual documents as PDFs.  Put together, it is a relatively complete set of features for a variety of applications.
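As a rough sketch of what calling such a REST API looks like: the endpoint URL, parameter names, and response fields below are illustrative placeholders, not Docket Alarm's documented API; consult the actual API documentation for those.

```python
import urllib.parse

# Hypothetical endpoint, for illustration only.
BASE_URL = "https://example.com/api/search"

def build_search_url(party_name, date_from=None, court=None):
    """Assemble a GET URL for a hypothetical docket-search endpoint."""
    params = {"party_name": party_name}
    if date_from:
        params["date_from"] = date_from
    if court:
        params["court"] = court
    return BASE_URL + "?" + urllib.parse.urlencode(params)

url = build_search_url("Acme Corp", date_from="2012-01-01")
# An HTTP GET on `url` would return structured docket data: the judge,
# the parties and their attorneys, and links to individual PDF filings.
```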

The API exposes only a small subset of the features available on the full Docket Alarm website.  If requested, additional features can be added.

The API specification is currently live and fully documented. Documentation is located here. If you are interested in using this feature, please let me know.

Tuesday, April 10, 2012

9th Circuit rules that violating a website's terms of service is not criminal.
http://ping.fm/8Ei07

Monday, January 23, 2012

U.S. Courts PACER: An Accessible, Open-Source API

Get Access to All Information on the U.S. Courts Docketing System

Anyone who has tried to look up a court case on a government website has run into the Public Access to Court Electronic Records system, or as everyone calls it, PACER. I have developed and just released a new API that gives programmers access to all public information on the U.S. Federal Courts docketing system.

Features include:
1. Search for cases by party name, docket number, and filing date.
2. Retrieve the names of parties to a case, their attorneys, and law firms.
3. Download the entire docket of a particular case.
4. Download PDFs of individual filings and their attachments.
5. Keep track of costs of each PACER transaction.
Right now, there are hooks into all Federal District Courts, most Appeals Courts, most Bankruptcy Courts and also the I.T.C. I am not aware of any other service or API which offers something similar for the I.T.C.

This project does not make PACER free; it still costs $0.08 per page (which can add up quickly). Although the API works perfectly as stand-alone Python, it can plug into Django (or any other Python framework) very easily. There are also hooks (and some meager documentation) to make it work on Google App Engine.
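A quick sketch of what those per-page fees add up to. The per-page price comes from the post above; the 30-page cap per document reflects PACER's historical billing policy for most filings, and should be treated as an assumption to verify against the current fee schedule:

```python
PRICE_PER_PAGE = 0.08   # PACER's per-page fee, as quoted above
PAGE_CAP = 30           # assumed per-document page cap for most filings

def document_cost(pages, price=PRICE_PER_PAGE, cap_pages=PAGE_CAP):
    """Estimate the PACER charge for retrieving one document."""
    return min(pages, cap_pages) * price

short_doc = document_cost(10)    # $0.80
long_doc = document_cost(100)    # capped at 30 pages: $2.40
```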

Also note that this project is released under the AGPL, a free and open-source license, but one which requires you to open-source your code if you use it in a program or a web-app.

The project can be found:

I am building a web-service which exposes a REST API to PACER and it will use this open-source API. If you are interested in learning more, let me know.