Programming Concepts and Protection

Software is everywhere.  It is the part people think of when using a computer, and it is the system that interacts with all the other computer systems out there.  Think of everything from the Freeview menus on your TV to the buttons that control your washing machine, and upwards.

Ultimately software is our gateway to accessing data of all different types (think gatekeeper or sentinel).

As a result, it needs protecting in its own right.

Software Landscape

First up, you need to recognise the software landscape as it stands today, and consider the emerging future and the challenges it poses.  With a knowledge of the different types of software in use in and around your organisation, you will have a greater appreciation of the assets that need protection: software lives on physical devices, software controls access to data, and software gets used by systems and people.

There has been a long-standing trend that software is becoming more open.  By that, I don’t mean less secure, although there is that possibility. Instead, I mean that the interoperability between software systems is becoming more and more common and more and more capable.  The question is no longer “why on earth would people want to connect to this” but more “what cool things could people do if they could connect to this”.  In itself, this is not a problem, though it does allow another avenue for exploiting poorly designed or written software.

Different Levels of Software

As an extension of this interoperability, software on complex devices such as workstations operates on many levels.  At the lowest level you have firmware: instructions programmed directly into physical devices.

A step up you will find device drivers: the layer of abstraction that allows physical devices and their firmware to talk to the operating system.

The next layer up is the operating system, which is the management layer responsible for coordinating resources.

Further up the chain you will find applications and services.  These are the software people are most frequently interested in: they allow you to browse the web, type a document or access a database.  They usually need installing and are quite dependent on having a stable environment to live within.

Finally (though some may disagree) there are utilities.  Utilities are usually very small standalone applications that perform specific basic tasks.  All of this makes the life of an information security professional quite interesting, and it can certainly make it complicated.

Closed Source vs Open Source

In particular, Open Source software carries more meaning and implications than are immediately relevant here.  As the CISSP CBK describes, “Open Source” has many competing definitions; a common theme across all of them, however, is that the source code (i.e. the code before it is compiled) is viewable by the general public.

There is a long-running debate regarding the security of these two approaches.  On the one hand, Open Source advocates claim that by opening the source, anyone can spot bugs and security vulnerabilities and even submit corrections.  On the other hand, if you can spot vulnerabilities you can take advantage of them, and you don’t have to report them.  The open source mentality is also up against the concept of competitors using your source code to create something similar, making it easier for them to compete.

Disclosure Policies

Disclosure policies refer to the process by which the discoverer of a previously unknown vulnerability discloses what they have found.

This has developed over time in response to socio-economic pressures.  No disclosure and full disclosure sit at the two opposite ends of the spectrum.  Currently, the approach generally considered best is a hybrid of the two.

First, the discoverer attempts to contact the vendor.

Then the discoverer waits a reasonable period of time.  If the problem is patched, full details can be given to the security community; if not, further attempts can be made to contact the vendor.  If those are in vain, headline details can be released to the community, a further reasonable wait follows, and then more contact attempts are made.

Giving out full details should be the last resort, but should all the previous attempts be ignored, it is generally considered better to have a publicly known vulnerability than one that is being ignored.

Programming Languages

In brief, different programming languages have different capabilities and come from different eras of computing.  As a result, each language has its own set of security considerations.

Take two relatively modern languages as an example.

JavaScript and Java (no relation) are treated very differently in their environments.  One is an interpreted language that runs inside web browsers; the other is semi-compiled and runs inside a sandboxed virtual machine.

It’s also not as simple as saying that newer languages are more secure – they are simply different. 

Older languages allow the programmer more control over their creation, and as a result writing secure code in them requires a greater level of skill.  Equally, modern languages remove a lot of easy mistakes from the programmer’s reach, but in the process they open themselves up to less-skilled programmers who don’t necessarily realise how malicious actors can take advantage of their code.  Ultimately, however, each programming task is best suited to a specific language, so as long as the security considerations are taken care of there shouldn’t be a problem.

Software Vulnerabilities

Software vulnerabilities come in many shapes and sizes:

  • Buffer Overflows

Buffer overflows are as old as the silicon chips that computers are made of.  In short, a buffer overflow occurs when more data is provided to a memory location than it can hold.  The computer will try to store all the data regardless and, as a result, may overwrite other code or data.  The clever bit is where the attacker can get the “extra” data into a memory location that gets executed later on.
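To make that concrete, here is a minimal C sketch (the function, buffer size and input handling are invented purely for illustration): a fixed-size buffer on the stack is filled by an unchecked strcpy(), so anything longer than the buffer spills into the adjacent memory.

#include <stdio.h>
#include <string.h>

/* Illustrative only: a fixed-size stack buffer filled with no length check. */
void greet(const char *name)
{
    char buffer[16];
    strcpy(buffer, name);   /* input longer than 15 characters plus the
                               terminator overwrites adjacent stack memory */
    printf("Hello, %s\n", buffer);
}

int main(int argc, char *argv[])
{
    if (argc > 1)
        greet(argv[1]);     /* the attacker controls the length and content */
    return 0;
}

A safer version bounds the copy, for example snprintf(buffer, sizeof(buffer), "%s", name), which is essentially the discipline that more modern languages and library functions enforce for you.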

  • Citizen Programmers

Operating systems and applications can come with their own scripting languages.  Typically these languages have low entry barriers, and as a result you end up with untrained individuals writing and executing tools.  These same scripting languages can typically be given substantial access to the operating system and beyond.  Sometimes, if you put two and two together, you really do end up with “hacked”.

  • Covert Channels

This is any method by which two cooperating processes transfer information against the security policy of the organisation.  Quite a vague definition, but that is because a covert channel is likely to be situation specific.  An example would be where an application creates and deletes a process ID file on a Linux server.  The existence or absence of this process ID file could equate to binary digits which, whilst a relatively slow process, could transfer information should another process be monitoring the situation.
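Here is a minimal sketch in C of the sender side of that PID-file channel (the path, timing and message are assumptions made up for the example); a cooperating receiver would simply poll for the file on the same schedule and rebuild the bits.

#include <stdio.h>
#include <unistd.h>

#define CHANNEL_FILE "/tmp/app.pid"   /* illustrative path */

/* Signal one bit per agreed time slot: file present = 1, file absent = 0. */
static void send_bit(int bit)
{
    if (bit) {
        FILE *f = fopen(CHANNEL_FILE, "w");
        if (f)
            fclose(f);
    } else {
        unlink(CHANNEL_FILE);
    }
    sleep(1);                          /* the agreed time slot */
}

int main(void)
{
    int message[] = {0, 1, 0, 0, 0, 0, 0, 1};   /* the letter 'A', bit by bit */
    for (size_t i = 0; i < sizeof(message) / sizeof(message[0]); i++)
        send_bit(message[i]);
    unlink(CHANNEL_FILE);              /* tidy up the channel afterwards */
    return 0;
}

Neither process ever sends data to the other directly, which is exactly why this kind of channel is so hard to spot with conventional monitoring.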

  • Malware

Good old viruses and other such malware!  These could be designed to attack a specific program, not just the underlying operating system!

  • Malformed Input Attacks

Malformed input is any attempt to put data into an input field that should not ordinarily belong there.  The example given is that of a URL supplied as Unicode-encoded characters instead of ASCII.  Firewalls, content filters and the like may well ignore or simply not understand these requests, but the web browser will by default just get on with it.  Malformed input attacks can be used as part of, or alongside, other attacks such as buffer overflows.
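The C sketch below shows why encoding defeats naive checks (the filter, paths and encoding are invented for illustration): a test for "../" applied to the raw input misses exactly the same bytes when they arrive percent-encoded, even though the receiving software will decode them and act on them later.

#include <stdio.h>
#include <string.h>

/* Naive filter: looks for a directory-traversal sequence in the raw input. */
static int looks_safe(const char *path)
{
    return strstr(path, "../") == NULL;
}

int main(void)
{
    const char *plain   = "/files/../etc/passwd";       /* caught by the filter */
    const char *encoded = "/files/%2e%2e/etc/passwd";   /* slips straight past  */

    printf("plain   considered safe? %s\n", looks_safe(plain)   ? "yes" : "no");
    printf("encoded considered safe? %s\n", looks_safe(encoded) ? "yes" : "no");

    /* The receiving end later decodes %2e%2e back to "..", so input must be
       canonicalised (decoded) before it is checked, not after.              */
    return 0;
}

The lesson is the same whatever the encoding: canonicalise input first, then validate it.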

  • Memory Reuse

Memory reuse is simply where a memory location, and thereby its contents, gets used by a process or function that it was not intended for.  Certainly in older languages variables had to be explicitly initialised by the programmer, in part to make sure that they did not already contain information that would (a) change the execution of the program or (b) provide erroneous information.
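A small C sketch of the problem (the buffer sizes and the "secret" string are made up for the example): a freed heap block is handed back by the allocator without being wiped, so a later, uninitialised allocation may still contain the earlier data.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *first = malloc(32);
    if (first == NULL)
        return 1;
    snprintf(first, 32, "secret-token-12345");    /* "sensitive" contents    */
    free(first);                                  /* freed, but not wiped    */

    char *second = malloc(32);                    /* may reuse the same block */
    if (second == NULL)
        return 1;
    printf("uninitialised block holds: %.31s\n", second);   /* may leak old data */

    free(second);
    return 0;
}

The mitigations are the ones the paragraph hints at: initialise variables (or use calloc()), and wipe sensitive buffers before releasing them.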

  • Mobile Code

Mobile code is any code that is sourced from another machine over a network but executed locally.  It comes in different shapes and sizes, and there are a number of sub-categories with slight differences.

  • Social Engineering

Social engineering applies here just as it does in every other technical and non-technical situation.  It requires creative thinking on the part of the attacker to implement.

  • Time of Check and Time of Use

This is where the check happens, for example authentication, and then the use happens later, allowing the user to gain access to a system outside the allowed time.  The most obvious occasion for this is when a member of staff is fired: they have a brief opportunity to cause damage to the information systems or perform data theft whilst they are still logged on, as disabling a user account does not necessarily log the user off immediately.  This is not just limited to human interactions; the example given in the CBK is of a comms line dropping.  Should an attacker pick the line up before the drop is noticed, they would gain access as per the previous session.
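In code, the classic textbook illustration is a file-system race, sketched in C below with an invented path: the permission check (time of check) and the open (time of use) are separate steps, and an attacker who swaps the file for a link to something more sensitive in the gap between them wins.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/tmp/report.txt";        /* illustrative path */

    if (access(path, W_OK) == 0) {               /* time of check */
        /* ...the window in which an attacker can replace the file... */
        int fd = open(path, O_WRONLY);           /* time of use   */
        if (fd >= 0) {
            write(fd, "update\n", 7);
            close(fd);
        }
    }

    /* Safer: skip the separate access() check, open the file directly with
       the privileges the program actually has, and handle open() failing.  */
    return 0;
}

The same principle applies to the human examples above: the check and the use need to be tied together, or re-checked at the point of use.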

  • Between the Lines

This is where comms lines are tapped into, either to read data off them or to inject data into them.  Physical security and encryption are the best methods of mitigation.

  • Backdoors

Backdoors can be created by rogue programmers, or intentionally by organisations to facilitate ease of administration or provide some other form of control over their product.  Either way, they can be used against you.
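As a contrived C sketch (every name and value here is invented), a backdoor is often nothing more exotic than a hard-coded credential buried in an otherwise normal login routine:

#include <stdio.h>
#include <string.h>

/* Stand-in for the product's genuine credential check. */
static int normal_login(const char *user, const char *pass)
{
    return strcmp(user, "alice") == 0 && strcmp(pass, "correct-horse") == 0;
}

static int login(const char *user, const char *pass)
{
    /* Undocumented "maintenance" bypass: anyone who learns or reverse
       engineers this pair gets in, on every installation of the product. */
    if (strcmp(user, "support") == 0 && strcmp(pass, "letmein!") == 0)
        return 1;
    return normal_login(user, pass);
}

int main(void)
{
    printf("backdoor login %s\n", login("support", "letmein!") ? "granted" : "denied");
    return 0;
}

Code review and a controlled build process are the practical defences here, which leads neatly into the next section.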

Protection

Scared?  You shouldn’t be.  The best way of protecting yourself against software attacks is a good development plan.  There are pitfalls though…

First of all you should choose a suitable software development life cycle.  Each life cycle model will have different phases of development, whether that be a simple set of three (concept, design and implementation) or a fuller set such as:

  • Project initiation and planning
  • Functional requirements definition
  • System design specifications
  • Development and implementation
  • Documentation and common program controls
  • Testing and evaluation control
  • Certification and accreditation
  • Transition to production
  • Operations and maintenance support
  • Revisions and bug fixes
  • System decommissioning and replacement

During all stages, but in particular at the project initiation and planning stage, the security consultant should be concerned with the data in its varying forms and the value it has or will have.  Questions to be asked include: what are the data classifications?  How will the operation of this application affect or expose the data?  Will this system need special consideration in a Disaster Recovery plan?  Do we need to be able to reconstruct users’ actions in an audit trail, or be able to utilise older versions of data?

At the documentation and testing stages, there are additional questions that are pertinent.  Are input fields sanity-checked?  Is everything that needs to be logged actually being logged?
