Saturday, June 07, 2008

When the Internet becomes Electricity

In the 19th century, scientists including André-Marie Ampère, Michael Faraday and Georg Ohm made discoveries that underpin the very fabric of Western society to this day. These individuals not only made key discoveries in the field of electricity but also paved the way for its practical application, both commercial and domestic, and laid the foundations of the second industrial revolution, which some say was fuelled by electricity.

Indeed, the early 20th century saw an explosion in the growth of electronics, with individuals such as Lord Kelvin (telegraphy), Alexander Graham Bell (telecommunications), John Logie Baird (televised images) and many, many more whose inventions and discoveries would eventually turn electricity from a rare commodity into a basic need.

Speaking from a personal perspective, a few weeks ago I realised just how much I rely upon electricity when we had a power outage. Not only was I unable to use the Internet to research my work; I couldn't have a hot shower, watch TV, run the central heating (even though it is gas), cook food or hot drinks, wash my clothes or even use the phone (which relied on mains power as well as power from the phone line). The prospect of being without what had until this point been my invisible slave was somewhat difficult to contemplate. My way of dealing with this was to go to a local coffee shop and order myself a fried breakfast.

The 20th century would not play host to the second industrial revolution alone, however. After the two world wars, research had begun into microelectronics and its applications in computing, which had until then been dominated by mechanical devices. In 1965 Gordon Moore made his famous prediction, now known as Moore's law, which would become a driving force behind the growth of integrated electronics in computer chips. It wasn't until the early 1990s, however, that the next revolution would begin.

In 1990 Tim Berners-Lee and Robert Cailliau pioneered the first HTTP communication over the Internet, and in 1993 the first popular graphical web browser (NCSA Mosaic) pushed the World Wide Web into a spiral of uncontrolled growth. So rapid was this growth that it triggered a huge rush from the commercial sector to cash in on the Web's money-making potential. Such rapid growth was unsustainable, and the whole lot came crashing down in 2001.

Since then, advances in high-speed Internet connectivity have made the humble broadband connection available to all but a few of us, and for today's businesses a web presence is a must. In addition, the Internet is penetrating further into the lives of the individual: wireless allows connectivity to fade into the ether, and reliance on online services such as webmail, photo and video sharing and online shopping is increasing. Many businesses now use VPNs to allow employees to work on the go and from home, which has in turn increased the number of wireless hotspots.

In the 1990s the World Wide Web and all it had to offer seemed an exciting prospect; however, its infancy was exposed by what became known as the dot-com boom, and businesses had to adjust their plans to fit the more competitive environment of the real world to make the Web a viable commercial entity. The Internet is now coming of age and beginning to find its niche within the commercial sector. The growth of information technology and the so-called digital revolution mirror the industrial revolutions of the 19th and 20th centuries.

The digital revolution is by no means over, and the Internet is still very much in its teenage years. As with electricity, we are beginning to see information technology and the Internet go down the path of ubiquity. There is a lot of hype around the idea of a "ubiquitous Internet"; my opinion is that it will happen, but that the transition will be far more subtle. We are living in interesting times, as the next ten years will see the Internet mature and become the electricity of modern times.

So what are the implications of the Internet becoming our new electricity? To avoid confusion, it should be noted that it won't be a replacement for electricity; indeed, nearly all Internet technologies require electricity in one form or another. Above, I outlined my personal experience of one hour without electricity and the restrictions it placed on my daily activities. If one day we come to rely on the Internet to this extent, we will have to worry not only about power cuts but also about connectivity, availability, bandwidth and hardware.

The Internet is making, and will continue to make, the individual more reliant on technology and less able to function at a lower level. A good example: how many of you rely on your mobile phone's calculator to do maths, your sat-nav to get you to work every morning and a cleaner to tidy your underwear? What happens when you find yourself stranded in a jungle with no food, no phone signal, no GPS and no cleaner? The question is how much longer this increasing reliance is sustainable, and what problems, if any, it will pose in the future.

Thursday, December 06, 2007

Urban Whispers

At work today I came across an interesting email titled: "Message to Employees - a tale that is a bit worrying if it's true.... Stay Safe"

The text read as follows:
Warning - We have received a warning from the London Ambulance Service of activities in their area. Whilst the below behaviour is not commonplace in our area, I have spoken with Greater Manchester Police and their risk assessment of the action is to circulate it as a potential risk.

The London Ambulance service have units closely associated with the Police based in South London who are basically Fighting Gang Crimes. The 'street gangs' in London (particularly South London at present, but it is sure to spread) have initiation tasks which new gang members have to carry out to be admitted to the 'gang'. The latest craze is to drive around, deliberately with no lights on their cars. The first person who 'flashes' them, points at them or sounds their horn at them, has to be followed by that new gang member in their car, who then has to fire a shot into that vehicle with no regard as to who is inside.
Our official instruction is that if we see a vehicle with no lights on, we are NOT to 'flash' it etc. and the advice to friends and family is that you should ignore any vehicles you see without lights. I would ask that you pass this info on to all your family, friends and colleagues and who knows, it may save a life.
A quick Google search of a few of the terms in this email revealed that it is an urban legend. Notice also that the News Shopper story was published in May 2004 and that the legend first came to light in 1993.

The first question that comes to mind is: why make up such stories? A story that generates needless email traffic, incites fear in the reader, wastes the time and resources of law-enforcement authorities and, in this case, could be the indirect cause of a motor vehicle accident.

Urban legends, however, are far from modern, whatever their name may suggest. These stories can be compared to modern-day Chinese whispers, which in the majority of cases have no more than a whisper of truth to them. Since the Internet revolutionised communication, urban legends have been able to spread around the world like viruses, facilitated by email, word of mouth, SMS and in some cases even news publications and school syllabuses. For example:
  • Sneeze with your eyes open and they'll pop out.
  • The Coriolis effect in bath tubs.
  • Microwaves cooking food from the inside out.
One recent myth that made its way into the press was the premium-rate phone scam. Several organisations, including Saga, reported that pressing 9 upon receiving a phone call saying "You've won a holiday." would put you through to a £20-per-minute premium-rate line. ICSTIS in fact confirmed that this was not the case: to be charged a premium rate one would need to call a premium-rate number, and even then the maximum charge is only £1.50 per minute.

HowStuffWorks states that urban legends are often cautionary tales: some morbid, some with morals and others that are simply humorous. The origin of an urban legend is often unknown and, returning to the Chinese-whispers analogy, the story is wholly open to interpretation and therefore exaggeration. While the source is often a true story, the end result is likely to bear little or no similarity to the original.

In conclusion, urban legends such as the "car lights" story have that cautionary element, but they also rely on a scare tactic. It could be argued that distributing such stories is both unethical and potentially dangerous. Any such story should be backed up by a link to a recognised and respected publication and be verifiable from other sources. But, as demonstrated by the premium-rate scam example, even this can sometimes be misleading.

My personal recommendation: if such an email arrives in your inbox, treat it like spam and delete it. A Google search of a few of its phrases will usually tell you whether or not it is an urban legend.

Sources:
HowStuffWorks, News Shopper, New Scientist, Hoax Busters

Friday, February 09, 2007

Composite Primary Keys - Friend or Foe?

Effective data modelling can be a daunting task. Robustness, efficiency and normalisation are crucial and may determine the success or failure of the systems that rely upon the data. Fortunately, the relational DBMS makes the task a lot easier by providing both the infrastructure with which to model the data and the tools with which to do it.

One of the key practices when modelling data within an RDBMS is the use of a unique identifier for each relation: a primary key. The primary key not only serves as a unique identifier; it is also an index (dramatically speeding up searches on the key) and can be used in relationships with other relations.

As such, a primary key is often a single column of an integer data type, with many database management systems providing a facility to generate its value automatically upon insertion of a record.
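
In MySQL, for example, such a key might be declared as follows (the table and column names here are purely illustrative):

-- A typical single-column surrogate key; MySQL generates the value
-- automatically on INSERT via AUTO_INCREMENT (PostgreSQL uses SERIAL
-- and SQL Server uses IDENTITY for the same purpose).
CREATE TABLE customer (
    id   INT UNSIGNED NOT NULL AUTO_INCREMENT,
    name VARCHAR(100) NOT NULL,
    PRIMARY KEY (id)
);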

It is also possible, however, to create a composite primary key containing two or more columns. It is their use that resulted in a heated discussion between myself and a fellow programmer recently. The arguments against their use are quite logical:

  • Uniqueness is spread across multiple columns, meaning any query must include every column of the composite key to guarantee a unique result.

  • The primary key index is slower as it spans more than one column, and this introduces a slight delay when inserting new records because the larger index must be updated.

Suffice to say, I think this fear of composite keys stems from their often inappropriate use in Access, where the inexperienced database designer is tempted to combine several (sometimes many) fields across multiple relations to ensure a user does not accidentally duplicate data:


Such use of a composite key is definitely not a good move: while it goes some way to ensuring the same customer is not added twice, it is grossly inefficient and adds unnecessary complexity. Many, however, hold the opinion that a composite primary key should never be used. This is where I disagree. Consider the following common scenario, which models an order processing system.
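
A minimal sketch of this first model in MySQL-flavoured SQL (all names and types are illustrative) might be:

-- Parent relations, trimmed to the essentials.
CREATE TABLE orders (               -- "orders": ORDER is a reserved word
    id     INT UNSIGNED NOT NULL AUTO_INCREMENT,
    placed DATETIME NOT NULL,
    PRIMARY KEY (id)
);

CREATE TABLE product (
    id   INT UNSIGNED NOT NULL AUTO_INCREMENT,
    name VARCHAR(100) NOT NULL,
    PRIMARY KEY (id)
);

-- First attempt: a surrogate key plus two plain foreign keys.
-- Nothing here stops the same product appearing twice in one order.
CREATE TABLE order_item (
    id         INT UNSIGNED NOT NULL AUTO_INCREMENT,
    order_id   INT UNSIGNED NOT NULL,
    product_id INT UNSIGNED NOT NULL,
    quantity   INT UNSIGNED NOT NULL,
    PRIMARY KEY (id),
    FOREIGN KEY (order_id)   REFERENCES orders (id),
    FOREIGN KEY (product_id) REFERENCES product (id)
);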

Here, the order_item relation uses a primary key of its own plus two foreign keys that relate to the order and product relations.

Each order may contain one or more products, but the same product cannot be added to an order more than once (the quantity column specifies the number of units). Using the above model, you must check, either with a trigger or in the application code, that a duplicate will not occur before adding a new product to an order.

Using a trigger, you could silently update the quantity should a duplicate be found, but without a trigger or stored procedure there is no other way of ensuring a duplicate item is not added to an order. Enter the revised model:
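
Continuing the illustrative schema sketched above, the revised order_item becomes:

-- Revised: the composite primary key makes a duplicate product within
-- an order impossible at the data-model level, and the surrogate id
-- column disappears.
CREATE TABLE order_item (
    order_id   INT UNSIGNED NOT NULL,
    product_id INT UNSIGNED NOT NULL,
    quantity   INT UNSIGNED NOT NULL,
    PRIMARY KEY (order_id, product_id),
    FOREIGN KEY (order_id)   REFERENCES orders (id),
    FOREIGN KEY (product_id) REFERENCES product (id)
);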

Here, the order_item relation uses a composite key containing the order and product IDs. The uniqueness is now spread across two columns and there is no ID column.
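
As an aside, on MySQL the composite key also gives you the "silently update the quantity" behaviour mentioned above without any trigger at all, via INSERT ... ON DUPLICATE KEY UPDATE (the values are of course illustrative):

-- Adds a new line, or tops up the quantity if this product is already
-- on the order; the duplicate is detected through the composite
-- primary key.
INSERT INTO order_item (order_id, product_id, quantity)
VALUES (42, 7, 1)
ON DUPLICATE KEY UPDATE quantity = quantity + VALUES(quantity);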

In my opinion the use of a composite key provides several advantages:

  • The uniqueness of each order item is ensured at the data-model level without the need for a trigger. This brings two further advantages:

    • Where a database is accessed by multiple applications, relying on those applications to check uniqueness for you is dangerous, especially if you are not the one developing them.

    • A trigger or stored procedure will solve the uniqueness problem, but processing a trigger carries more overhead than simply checking a composite key.

  • The order_id and product_id fields will benefit from indexes regardless of whether a composite primary key is used; using the composite primary key reduces the number of separate indexes on the relation.

  • Although queries require both parts of the key to pinpoint a row, it is rare that you would need a query that returns just a single line within an order (see the examples just below).
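
For example, fetching one specific line needs both halves of the key, while the far more common per-order query is served by the leading column of the composite index:

-- A single specific line: both key columns are required.
SELECT quantity FROM order_item WHERE order_id = 42 AND product_id = 7;

-- All lines for an order: uses the leading column of the key.
SELECT product_id, quantity FROM order_item WHERE order_id = 42;
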
It is my opinion that in some situations a composite key is preferable and can be advantageous. Casting aside those diabolical Access databases with their composite "mash-up" keys, composite keys can help achieve a level of integrity, and maybe even efficiency, within your database system.

Yes, I did use Access for the diagrams; one thing it Excels at (pun intended) is the rapid creation of ERDs.

Monday, November 20, 2006

Multiple Tabs and Windows with session_regenerate_id()

A week ago I was asked to improve the security of the login system used for one of the sites I manage. Prior to this, the data displayed on the site was not of a sensitive nature, but a recent project I have been working on requires a little more security.

I decided to do two things:
  1. I added a JavaScript MD5 implementation to hash strings. Within the login form a random string is placed in a hidden field, and the same string is stored in the server-side session. Upon pressing the submit button the random string is concatenated with the password and the result is MD5 hashed (a sketch of the server side follows this list).
    • First, this ensures the password is not sent in plain text across the unsecured connection.
    • Second, it ensures that an intercepted MD5 string cannot be replayed to log on, since it is generated from the password and a random string tied to the user's session.
  2. Additionally, I decided to call session_regenerate_id() on every single request, meaning a session ID is only ever good for one request. Upon changing the session ID, the old session was of course deleted.
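
To make point 1 concrete, here is a minimal sketch of the server side of that scheme. Every name in it (login_nonce, pwhash, get_user_password()) is illustrative rather than the site's actual code, and note that verifying md5(password . nonce) presumes the server can reproduce the concatenation, i.e. that the password itself is available server-side:

<?php
session_start();

if (!isset($_POST['pwhash'])) {
    // Rendering the login form: generate a single-use random string,
    // store it in the session and embed it in a hidden form field.
    $_SESSION['login_nonce'] = md5(uniqid(mt_rand(), true));
    echo '<input type="hidden" name="nonce" value="'
       . $_SESSION['login_nonce'] . '" />';
} else {
    // Handling the submit: the JavaScript sent md5(password . nonce)
    // in place of the plain-text password.
    $stored = get_user_password($_POST['username']); // hypothetical lookup
    $ok = ($_POST['pwhash'] === md5($stored . $_SESSION['login_nonce']));
    unset($_SESSION['login_nonce']); // single-use: a replayed hash now fails
}
?>
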
The first change worked fine, and for those who don't have JavaScript enabled, the unencrypted password and login string are sent as before. However, a couple of days after sending out the update for the site I started getting complaints from some users that they had to keep logging on.

Upon viewing the logs, it was clear that some users were having to log on several times during a session. I did some investigating and came up with the following cause:
  • It was clear from the logs that the cookie being sent contained an old session ID that had not been updated on the user's PC after the call to session_regenerate_id().
  • As the old session had been deleted, it appeared as if the user had attempted to request the service without ever starting a session and logging on.
  • The previous valid request was at exactly the same time as, or less than two seconds before, the failed request.
After servicing a request, the application dispatches it to another script (designed only to display the result) via an HTTP redirect using the Location header. I concluded that the redirect was taking place before the web browser had a chance to update the cookie, and as such the old session ID was sent.

The reason for this? Being unable to replicate the error on my own system, I concluded that the user's PC was not fast enough to update the cookie before following the redirect. I therefore modified the application so that session_regenerate_id() is only called on a request that has not arrived via a redirect, sent out the update and hoped this would fix the problem.
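
The guard itself is simple enough. A sketch of the idea, where the "redirected" flag is my own invention rather than the application's real mechanism:

<?php
session_start();

// Rotate the session ID only on requests that did NOT arrive via one
// of our own Location redirects; a redirected request can reach the
// server before the browser has stored the freshly issued cookie.
if (empty($_GET['redirected'])) {
    session_regenerate_id(true); // true = delete the old session (PHP 5.1+)
}

// ... service the request, then hand over to the display script,
// flagging the follow-up request so that it skips the regeneration.
header('Location: /ordersearch_display.php?redirected=1');
exit;
?>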

Today, however, I checked the logs and it was clear that a number of users were still experiencing the same problem despite the modification. I needed to revisit the problem!

I trawled the logs again and after some time noticed that, once more, the previous request was within a second of the failed one. How was this possible? How can one person make two requests in such quick succession? After some thought it came to me: the user may have multiple windows or tabs open, typing in the data (in this case a single order number) and clicking the submit button in each window, one after the other. So I opened a few windows in Firefox, entered a few order numbers and clicked the submit button in each window at a modest speed. Sure enough, this replicated the error, and it is the most likely cause of the problem the users are experiencing.

I have temporarily disabled the call to session_regenerate_id() while I come up with a solution. The episode demonstrates once again the varied ways in which users will use your application, and how crucial the logs were in solving the problem. An excerpt is posted below:

Mon, 20 Nov 2006 08:04:38 +0000 123.123.123.123 /ordersearch_display.php
User agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727)
Get Variables: Array()
Post Variables: Array()
Cookies: Array( [sid] => 5442a47367571d58101184c1c71df2c4)
User Authenticated: USERNAME
Module: ordersearch loaded.
Module: ordersearch executed for output.
Regenerating Session ID old=5442a47367571d58101184c1c71df2c4 new=5e507a0b72f3dc47501f60316721e48
--------------------------------------------------------
Mon, 20 Nov 2006 08:04:40 +0000 123.123.123.123 /ordersearch_input.php
User agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)
Get Variables: Array()
Post Variables: Array( [order] => Array ( [ordernr] => 12345 [ord_seq] => ) [submit] => Retrieve Order Data)
Cookies: Array( [sid] => 5442a47367571d58101184c1c71df2c4)
User Not Authenticated

Friday, November 03, 2006

A Piece of Information I'd like to have!!

I have my own home server, which runs Linux. I use it to store music and documents for the whole family, to host web, FTP and SSH servers, and as a development platform for my web applications.

While I have put time and effort into making the server as secure as possible, I am all too aware that there could be, and probably are, security holes and vulnerabilities in its services that an attacker could exploit. I would like to know how many times (if any) my server has been compromised, and by whom.

If I were to have this information, I would first find the vulnerability or hole the perpetrator used to gain access and secure it so that it cannot be exploited again. Secondly, I would seek to learn some of the skills the perpetrator employed to gain access, and use that knowledge to foresee other possible holes and vulnerabilities in my setup and configuration.

What would you do?