Nowadays, when mankind is exploring Mars, it might seem that the problem of information protection should not exist at all. Yet we constantly read about new viruses, security holes and break-ins. We cannot fix the situation when the problem lies in other people's mistakes, but we can try to avoid our own.
In modern society everyone does his own job: one person programs well, another repairs cars well. This specialization has a large disadvantage: we become dependent on other people, and besides depending on them we sometimes have to give them very important information. Banks hold information about our accounts and purchases; hospitals know everything about our health.
Probably everyone has some information which he would not like to make public or lose. But you cannot be sure that the software installed in your bank contains no mistakes, can you? So we hope for the best, choose the best firms and trust that the best will never let us down. And to some extent this works: reputable firms take their clients' and employees' information security seriously. Thus, if you want to see your company at the top of the list, you have to think about security. Are the products you make secure? How safe can your clients feel?
The data security problem in multi-tier, client-server and network applications
A long time ago, when Novell DOS 7.0 was installed on my computer, I used its outstanding feature of locking files with a password. And how upset I was when I accidentally got access to the protected files from Windows for Workgroups 3.11 (with 32-bit disk access enabled). In fact, booting from an MS-DOS diskette was enough to open the password-protected files. But that was long ago. How do matters stand now? Do you think anything has changed? Take Windows XP as an example: many people have probably heard of, and even used, programs that give access to data by bypassing the operating system's protection.
When working with network applications the user exposes much more information to danger, since an attacker can access not only stored data but also information transferred over the network.
The main security threats are described below.
Unauthorized data access is the kind of threat in which an unauthorized person gets access to confidential information. It can lead to a situation where such information becomes public or is used against its owner.
Companies and private users transfer data over open communication channels, so such transfers are in extreme need of protection in order to preserve confidentiality.
Possible causes of unauthorized access to secret data are:
- network traffic transfer in clear (not encrypted) form;
- absence of authorization mechanisms for access to secret data;
- absence of access isolation mechanisms.
Unauthorized data modification is the kind of threat in which data can be changed or deleted, accidentally or intentionally, by a person who has no permission for such actions.
A threat of this type can damage data integrity or influence information that is not directly linked with the modified data. Such modifications are especially dangerous because they can go unnoticed for a long time.
Possible causes of unauthorized modification:
- absence of data integrity verification in software;
- password sharing or leakage;
- easily-guessed passwords;
- passwords kept in easily accessible places;
- absent or weak identification and authentication schemes.
Users of the Internet and other communication channels run the greatest risk when those channels are not controlled by the company that uses them. Even a company LAN (local area network), which might seem protected from outside attacks, can turn out to harbor an employee who would like to use secret information for his own needs.
The worst and most dangerous thing in such a situation is not poor system security itself, but the fact that the user believes he is protected when he is not. Most users do not know computers and software well enough to tell whether the system is secure or their data is in danger of unauthorized access. So the developer must take care of the user's security. The developer must foresee the possibility of an attack on the data stored on the user's computer as well as on the data during network operations.
Data encoding and encryption
One of the steps necessary to protect data is encryption. Encryption is the process of transforming data into a sequence of bytes using an encryption algorithm. The primary goal of encryption is to make the data inaccessible to anyone who does not have the key. Very often "protection" is attempted by keeping secret the algorithm used to transform the data; in other words, the author of such "protection" believes that if the algorithm is unknown, the data is properly protected. This is not encryption but encoding. Once the algorithm is revealed, such "encryption" is easily defeated, and the algorithm can be discovered from the software that uses the encoded data. Sometimes it is even possible to recover the data without knowing the algorithm's details.
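The difference is easy to demonstrate. The following sketch uses Base64, a well-known encoding, to show that an encoding merely changes the representation of the data: anyone who recognizes the format can reverse it, because no key is involved.

```python
import base64

# Encoding only changes representation; anyone can reverse it
# without any secret knowledge, because there is no key.
secret = b"account number: 12345"
encoded = base64.b64encode(secret)

print(encoded)                    # looks scrambled, but...
print(base64.b64decode(encoded))  # ...is trivially recovered
```

The same applies to any secret home-grown transformation: once the algorithm is known, the "protection" is gone.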
Real encryption is performed with encryption algorithms that are well known and have been carefully analyzed by cryptographers and mathematicians. The strength of such algorithms is tested and proved again and again. The only secret part in encryption is the key used to encrypt and/or decrypt the data.
The level of protection is determined not only by the algorithm itself but also by the way the algorithm is applied. Internet security protocols, for example, take special care over how keys are created and used.
Symmetric encryption algorithms
A specific algorithm and a key are used for encryption, and the same algorithm and key are used for decryption. That is why this method is called "symmetric". Another name is also used: secret-key cryptography.
Let us assume that you want to hide some important information from an intruder. You take a program with one of the popular cryptographic algorithms built in and command it to encrypt the important data. When the program finishes you get an encrypted file and a set of bytes, which is the key. The key is usually small and can sometimes be presented as text for convenience. Now it is enough to store the key in a secure place. The location of the encrypted data matters much less, because even if the intruder gets access to it, he will not be able to read it without the key.
When you want to decrypt the data you just give the program the encrypted file and the key.
Fig. 1 Symmetric key is used for both encryption and decryption
The advantage of this method is that you need to keep only a small key safe, not the whole data set; the key size does not depend on the size of the encrypted data. But this method becomes useless when you need to pass data over open communication channels. If you transfer the secret key over the same channel, encryption makes no sense: everyone who can intercept the information can intercept the key as well. And if you have a channel secure enough to pass the key, you could use it to transfer the data itself without encryption. Special key-exchange algorithms are used to solve this problem; we will talk about them later.
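The symmetry of the scheme can be illustrated with a deliberately simple toy cipher (repeating-key XOR; this is not secure and serves only to show that the same key performs both operations):

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric "cipher": XOR each byte with the repeating key.
    # Applying the same function with the same key reverses it.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"secret"
ciphertext = xor_cipher(b"transfer $100 to Bob", key)
plaintext = xor_cipher(ciphertext, key)   # the same key decrypts
```

A real application would use a vetted algorithm such as AES instead, but the shape of the API is the same: one shared key for both directions.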
As almost any sequence of bytes can be used as a key (assuming the sequence length meets the algorithm's requirements), random-number generators are used for key creation. The main task during key generation is to create an unpredictable key, since security depends heavily on this. The better the generator, the less likely it is that someone will be able to guess the next number it produces. To check how good a generator is, and whether the sequence it produces really looks random, cryptographers use statistical tests for randomness.
Random-number generators. Truly random numbers can be generated only with special devices. Such generators take unpredictable data from the environment: parameters of radioactive decay, atmospheric conditions or minor fluctuations of electric current can be used. It is easy to see that reproducing the conditions under which such a number was generated is practically impossible, which is why these generators are good enough. An alternative is to get random data from computer input devices such as the mouse (by asking the user to move it for some time).
Pseudorandom-number generators. A pseudorandom number is generated in two steps. First, the program gathers parameters that change with time, for example the system time, cursor position and so on. Second, the program calculates a digest (hash function) of those parameters. A digest algorithm produces a new sequence of bytes from the data given to it: the same input always yields the same digest, but changing even one bit of the input yields a completely different digest.
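The two steps above can be sketched as follows (the system time and process id stand in for the changing parameters; any similar sources could be used):

```python
import hashlib
import os
import time

# Step 1: collect parameters that change over time.
seed = f"{time.time_ns()}:{os.getpid()}".encode()

# Step 2: run them through a cryptographic hash to get the output bytes.
digest = hashlib.sha256(seed).digest()

# Flipping a single bit of the input yields a completely different digest.
flipped = bytes([seed[0] ^ 1]) + seed[1:]
assert hashlib.sha256(flipped).digest() != digest
```

For real key material the `secrets` module (or the operating system's CSPRNG) should be preferred over any hand-rolled construction.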
The question arises why the second step is needed when we already obtained "random" numbers in the first. The answer is that parameters such as the time or cursor position can easily be enumerated and tested one by one, so such data without further processing cannot be called truly random.
Not every hash algorithm is usable for cryptographic purposes; only specially designed digest (hash) algorithms are. Several hash algorithms are popular today; here are their short descriptions.
MD2. Ron Rivest first created a digest algorithm that he named MD, then found ways to improve it and produced the next variant, MD2. This algorithm returns a 128-bit digest, so the number of possible values is 2^128. Unfortunately, weaknesses were found in this algorithm later, and its use is no longer recommended.
MD5. After the less successful intermediate algorithms MD3 and MD4, Ron Rivest was able to offer a genuinely good algorithm, MD5, which gained wide popularity. It is more secure and faster than MD2 and also creates a 128-bit digest. (Practical collisions have since been found in MD5 as well, so it too should be avoided in new designs.)
SHA-1. This algorithm is similar to MD5 (Ron Rivest's work influenced SHA-1 too) but has a better internal structure and returns a longer, 160-bit digest. It was approved by cryptanalysts, and the cryptographic community strongly recommended it over MD5. However, it was later discovered that SHA-1 can be attacked rather successfully, so a stronger algorithm (such as SHA-2) should be used where possible.
SHA-2. This family supports hash lengths of 256, 384 and 512 bits and is the preferred choice at the moment.
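All of these algorithms are available in Python's standard `hashlib` module, which makes the digest sizes easy to check:

```python
import hashlib

msg = b"The quick brown fox"

print(hashlib.md5(msg).hexdigest())      # 128-bit digest (legacy only)
print(hashlib.sha1(msg).hexdigest())     # 160-bit digest (deprecated)
print(hashlib.sha256(msg).hexdigest())   # 256-bit digest (preferred)

assert hashlib.md5(msg).digest_size == 16     # 16 bytes = 128 bits
assert hashlib.sha1(msg).digest_size == 20    # 20 bytes = 160 bits
assert hashlib.sha256(msg).digest_size == 32  # 32 bytes = 256 bits
```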
Block and stream encryption in symmetric algorithms
You already know how to get a key, and your data is ready for encryption. Two types of algorithms are used: block algorithms and stream algorithms.
Block encryption. Such an algorithm splits the data into blocks and encrypts each block separately with the same key. If the data size is not a multiple of the required block size, the last block is padded up to the necessary size with some value. With a basic block algorithm, encrypting the same data with the same key always gives identical results. Such algorithms are usually used to encrypt files, databases and e-mail messages. There are also variations (chaining modes) in which the encryption of each block depends on the output of the previous block.
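The splitting and padding step can be sketched as follows (a 16-byte block, as in AES, and PKCS#7-style padding are assumed for illustration):

```python
BLOCK = 16  # block size in bytes, e.g. the AES block size

def pad(data: bytes) -> bytes:
    # PKCS#7-style padding: append N bytes, each with value N,
    # so the total length becomes a multiple of the block size.
    n = BLOCK - len(data) % BLOCK
    return data + bytes([n]) * n

padded = pad(b"hello")
assert len(padded) % BLOCK == 0

# The cipher would then encrypt each 16-byte block separately.
blocks = [padded[i:i + BLOCK] for i in range(0, len(padded), BLOCK)]
```

The pad value doubles as a record of how many bytes to strip after decryption, which is why even data that is already block-aligned receives one full padding block.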
Stream encryption. Unlike block encryption, such an algorithm encrypts each byte separately. Pseudorandom values generated from the key are used for encryption, and the result for each byte usually depends on the result for the previous one. This method has high throughput and is used to encrypt information transferred over communication channels.
Attacks on encrypted information
There are two ways to recover encrypted information: you can either try to find the key or exploit a vulnerability in the algorithm.
Key picking. No matter what algorithm is used, it is always possible to decrypt the data by trying all possible keys one by one. This is called a "brute-force attack". The only problem is the time that must be spent on the exhaustive search, so the longer the key, the better the data is protected. For example, an exhaustive search over 128-bit keys would take trillions of millennia. Of course, as computer performance increases this time goes down, but for the foreseeable future a 128-bit key will remain secure enough.
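A brute-force search is easy to demonstrate on a toy cipher with a one-byte key, which leaves only 256 candidates (a real attacker would recognize plausible plaintext; here we simply compare against the known message for brevity):

```python
def xor1(data: bytes, key: int) -> bytes:
    # One-byte-key "cipher", used only to illustrate exhaustive search.
    return bytes(b ^ key for b in data)

ciphertext = xor1(b"attack at dawn", 0x5A)

# A one-byte key gives only 256 candidates to try; a 128-bit key
# gives 2**128, which is why key length matters so much.
found = None
for k in range(256):
    if xor1(ciphertext, k) == b"attack at dawn":
        found = k
        break

print("key found:", hex(found))
```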
Use of algorithm vulnerabilities. Unlike the previous method, this one is based on discovering and exploiting weaknesses in the algorithm. In other words, if the attacker can find some regularity in the encrypted text, or can bypass the protection in some other way, he reduces the time required to find the key or decrypt the data. As most encryption algorithms are published, cryptanalysts all over the world study them looking for vulnerabilities. As long as no such vulnerabilities have been found in the popular algorithms, they can be accepted as secure.
Popular symmetric encryption algorithms
RC4 - a stream algorithm. It is most widely used in the SSL (Secure Sockets Layer) protocol.
DES (Data Encryption Standard) - a block algorithm that uses a 56-bit key. It was designed in the late seventies by researchers from IBM and the NSA (National Security Agency). The algorithm was investigated thoroughly, and experts concluded that it had no weak points - but that was in the 1980s. By the nineties, computers had become fast enough to attack it by exhaustive key search: in 1999 the Electronic Frontier Foundation decrypted DES-encrypted information in less than 24 hours.
Triple DES - the block algorithm that replaced DES. The principles of operation did not change, but each block of data is now encrypted three times with different keys, giving a 168-bit key in total. Later, however, an attack was found that reduces its effective strength to that of a 108-bit key under exhaustive search. In general this is enough for today, but in the future it might not be. The algorithm has one more problem: low processing speed.
AES (Advanced Encryption Standard) - NIST (the National Institute of Standards and Technology) announced a contest for a new algorithm. One of the main conditions was that the developers renounce intellectual property rights, which made it possible to create a standard that everyone can use without royalties. All candidate algorithms were investigated thoroughly by the worldwide community, and on the 2nd of October, 2000, NIST announced the winners: two Belgian researchers, Vincent Rijmen and Joan Daemen. Since then their algorithm has become the world cryptography standard, supported by most applications.
Blowfish by Counterpane Systems, SAFER by Cylink, RC2 and RC5 by RSA Data Security, IDEA by Ascom and CAST by Entrust are other algorithms developed by various cryptography companies.
As you can see, there are many encryption algorithms to choose from. When choosing a symmetric algorithm, speed and key length are usually the deciding factors.
Asymmetric (Public Key Encryption) Algorithms
Secret-key algorithms can encrypt data, but they are hard to use when you need to pass encrypted data to someone else, because you need to pass the key too. Transferring the key over a public channel is no better than transferring the clear data over that channel. The solution to this problem is asymmetric cryptography (public-key encryption), which was developed in the 1970s.
While symmetric cryptography is based on the principle that one key is used for both encryption and decryption, in asymmetric cryptography one key is used for encryption and another for decryption. These keys form a pair, and keys from different pairs never match each other.
Fig. 2 An asymmetric key pair consists of two parts - one for encryption and another for decryption.
One key is called private, and only its owner must have access to it; it must be kept strictly secret. The second key is called public, and it is not secret at all: everyone can use your public key. Suppose you want to encrypt some data for another person. All you have to do is encrypt the data with his or her public key. Now no one but that person will be able to read it - not even you can decrypt it back (a problem if, for example, you have deleted the original). So if you want to receive important information, you generate two keys: you store the private key in a secure place and distribute the public key in any way you like, for example by placing it on your web site. Now anyone can send you secret data encrypted with your public key, and you just use your private key for decryption.
But public-key encryption has one disadvantage: asymmetric algorithms work much more slowly than symmetric ones. So when a large amount of secret data must be transferred, it is encrypted with a symmetric algorithm (using a symmetric key), and then that key is encrypted with an asymmetric algorithm using the public key. Thus encryption is quick, because a symmetric algorithm does the bulk of the work, and there is no need to transfer the secret key as clear text. Usually each symmetric key is used only once: when the next document is encrypted, a new secret key is generated. Because the symmetric key is used in only one encryption session, it is often called a session key. In fact, the user may have no idea that a session key was involved at all: he only gave the public key to the encryption program, and it did everything else itself.
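The hybrid scheme can be sketched end to end with toy components (textbook-sized RSA numbers and a hash-based keystream; both are illustrative stand-ins for real algorithms such as RSA-2048 and AES):

```python
import hashlib
import secrets

# Toy RSA key pair of the recipient (illustrative numbers, not secure).
n, e, d = 3233, 17, 2753

def stream_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy stream cipher: derive a keystream from the session key by
    # hashing it with a counter, then XOR it with the data.
    ks = b""
    counter = 0
    while len(ks) < len(data):
        ks += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, ks))

session_key = secrets.token_bytes(16)            # fresh key per message
ciphertext = stream_encrypt(b"large document...", session_key)
wrapped = [pow(b, e, n) for b in session_key]    # wrap key with public key

# Recipient: unwrap the session key with the private key, then decrypt.
recovered = bytes(pow(c, d, n) for c in wrapped)
assert stream_encrypt(ciphertext, recovered) == b"large document..."
```

Only the small session key passes through the slow asymmetric operation; the bulk data goes through the fast symmetric one.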
Fig. 3 After data encryption the symmetric key is encrypted with the public key and merged with the encrypted data.
Asymmetric encryption systems are based on one-way mathematical functions: knowing the result, you cannot recover the input data. For instance, if you have the sum of two numbers, you cannot tell which numbers were added.
Public key algorithm security
As you already know, there are two possible ways to recover encrypted data: find the key or exploit an algorithm vulnerability.
Key picking. If the message is encrypted as described above, we have two parts: the message itself, encrypted with the (symmetric) session key, and the session key, encrypted with the public key. We have already discussed attacks on symmetric algorithms and keys, and discovering an asymmetric private key is an even harder task, because asymmetric keys are much longer than symmetric ones.
An attacker can try to use the fact that exactly one private key corresponds to a known public key and attempt to derive that key. But such an attack takes even more time, because it involves factoring a large number, and there are currently no efficient algorithms that can perform such calculations in practical time. Until such an algorithm is developed, public-key cryptography can be considered secure.
Use of algorithm vulnerabilities. This attack method is probably the most effective where public keys are concerned. The fact is that no public-key algorithm known today is entirely without weak points: for every asymmetric algorithm there are methods that recover the key faster than direct enumeration. This is not critical, however, since it has been shown that even exploiting those weak points, an attack would take far too much time, and the probability of stumbling on the correct value early tends to zero. So asymmetric encryption can be treated as secure enough for all modern practical purposes. The one thing to remember is that the longer the key you use, the better your data is protected.
Popular public-key algorithms
DH (Diffie-Hellman) - Stanford graduate student Whitfield Diffie and Professor Martin Hellman researched cryptographic methods and the key-exchange problem. As a result, they proposed a scheme that allows the creation of a common secret key through an exchange of open information. This scheme does not encrypt anything; it only makes it possible for two (or more) sides to generate a secret key that depends on all participants' contributions but is not disclosed to any third party.
This algorithm is not used for encryption; its purpose is to generate a secret session key. Each communicating side holds a secret number; there are also several public values, known to all participants, that can be transferred over open channels. To obtain the secret session key, the public values are combined with the secret ones.
Fig. 4 Diffie-Hellman algorithm. One shared secret value is created from the parties' different keys.
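The exchange is short enough to show in full with toy numbers (real systems use primes of 2048 bits or more; the values here are purely illustrative):

```python
# Toy Diffie-Hellman exchange.
p, g = 23, 5            # public prime modulus and generator

a = 6                   # Alice's secret number (never transmitted)
b = 15                  # Bob's secret number (never transmitted)

A = pow(g, a, p)        # Alice sends A over the open channel
B = pow(g, b, p)        # Bob sends B over the open channel

# Each side combines its own secret with the other's public value.
alice_secret = pow(B, a, p)
bob_secret = pow(A, b, p)

assert alice_secret == bob_secret   # shared session key material
```

An eavesdropper sees only p, g, A and B; recovering the shared value from those requires solving the discrete logarithm problem, which is infeasible at real key sizes.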
RSA. After Diffie and Hellman published their article in 1976, Ron Rivest (a professor at the Massachusetts Institute of Technology) took an interest in the idea and persuaded two of his colleagues, Adi Shamir and Len Adleman, to join the research. In 1978 they published a new algorithm, named after the authors' initials. It is often used with a 1024-bit or 2048-bit key and has become quite widespread.
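The mechanics of RSA can be sketched with tiny textbook primes (real keys use 2048 bits or more; these numbers are only for illustration):

```python
# Textbook RSA with tiny illustrative primes.
p, q = 61, 53
n = p * q                  # 3233, the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e

m = 65                     # the message, encoded as a number < n
c = pow(m, e, n)           # encrypt with the public key (e, n)
assert pow(c, d, n) == m   # decrypt with the private key (d, n)
```

Security rests on the fact that deriving d from (e, n) requires factoring n, which is infeasible for real key sizes.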
ECDH (Elliptic Curve Diffie-Hellman). Working independently in 1985, Neal Koblitz and Victor Miller concluded that a little-known field of mathematics, elliptic curves, could be useful in public-key cryptography. Algorithms based on elliptic curves began to spread in the nineties, and today they appear in several national information-security standards.
Once the application has exchanged keys, it can encrypt the data being sent. But can you be sure that the application sends the data exactly where it should? An attacker could substitute his own server for the real one and simply send his key during the key exchange. And how can you be sure that the message you received is really from the person you think sent it?
Digital signatures are used to confirm message authorship. As you already know, to encrypt a message so that only one person can read it, you encrypt it with that person's public key; such a message can be decrypted only with the recipient's private key. But what happens if you encrypt a message with your own private key? It can be read by anyone who has your public key, so it is not secret at all - but nobody else could have produced data that decrypts correctly with your public key. So only you can perform that encryption, and anyone who reads the message can be sure it was sent by you. As you remember, public-key algorithms are rather slow, so it makes no sense to encrypt the whole message this way; instead, only the message digest is encrypted with your private key. The procedure consists of two steps: first you calculate the message digest and encrypt it with your private key, then you attach the encrypted digest to the message you send. The recipient calculates the message digest with the same algorithm you used, decrypts the attached digest and compares the two. If the digests are equal, he can be sure that the message was sent by you and was not altered in transit.
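The sign-and-verify flow can be sketched with the same toy RSA numbers (real signatures use large keys and proper padding schemes such as PSS; reducing the digest modulo n is done here only so it fits the toy key):

```python
import hashlib

# Toy RSA key pair, illustrative only.
n, e, d = 3233, 17, 2753

def digest_int(message: bytes) -> int:
    # Hash the message and reduce it so it fits the toy modulus.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

message = b"pay Alice 100"

# Sender: "encrypt" the digest with the private key.
signature = pow(digest_int(message), d, n)

# Recipient: recompute the digest and check it with the public key.
assert pow(signature, e, n) == digest_int(message)
```

If the message is altered in transit, the recomputed digest no longer matches the value recovered from the signature, and verification fails.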
An attentive reader may ask how we can be sure that the public key we have really belongs to the specified person. Somebody could break into the server that stores public keys and put his own key in place of your partner's.
Digital certificates are used for authentication purposes.
In brief, a certificate can be represented as a set of records containing information about its owner plus certain cryptographic information. The owner information is usually human-readable, for example a name or passport data. The cryptographic information consists of the public key and the digital signature of a certificate authority (CA). This signature confirms that the certificate belongs to the person whose name is specified in it.
The scheme has become more complicated, but also more secure. Suppose you want to get a digital certificate. Depending on the required level of assurance, you either create a certificate request and send it to a CA, or go there in person so they can verify that you are the one the certificate is issued to. The CA then combines information about you and your public key into one certificate and signs it with its private key.
To make sure that a message was sent by you, the recipient has to do the following:
- get CA's public key;
- verify digital signature of your certificate using the public key of CA.
If the signature verifies against the CA's key, then the information contained in the certificate is valid and can be trusted - and in case of problems, the CA is responsible for the information contained in the certificate.
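The trust relationship can be sketched with hypothetical structures (a string stands in for the X.509-encoded certificate body, and the toy RSA signature from earlier stands in for a full-strength one):

```python
import hashlib

# Toy CA key pair, illustrative only.
CA_N, CA_E, CA_D = 3233, 17, 2753

def ca_sign(data: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % CA_N
    return pow(h, CA_D, CA_N)          # CA's private-key operation

def ca_verify(data: bytes, sig: int) -> bool:
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % CA_N
    return pow(sig, CA_E, CA_N) == h   # anyone with the public key

# The CA binds a name to a public key by signing them together.
cert_body = b"subject=alice;pubkey=...;issuer=ExampleCA"
certificate = (cert_body, ca_sign(cert_body))

# Anyone holding the CA's public key can check that binding.
body, sig = certificate
assert ca_verify(body, sig)
```

Tampering with the subject name or the embedded public key would change the digest and invalidate the CA's signature.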
But the next question appears: how can we know that the signature really belongs to the CA? It, too, must have a certificate confirming its public key. Self-signed certificates are used for this purpose. A self-signed certificate is signed with its owner's own digital signature, which means that you can also create one yourself. But that does not mean other people will trust it - and you, likewise, should not trust most self-signed certificates unless they belong to a root CA.
If you create a self-signed certificate for your company, you can use it to sign other certificates, for instance certificates for all company employees (and for them only). This practice lets you obtain as many certificates as you need without spending much, and it also increases the level of security inside your company. Certificates can be used not only by people but by applications as well, which is especially useful when information is transferred between applications over open channels.
If you develop a complex software application and want to protect the transferred data, most likely you will have to create a certificate infrastructure. Using certificates, client applications can check that they have connected to the intended server, while the server application can check whether the client has the right to connect to it. If you think that supporting certificates is a complicated task, don't worry: there are several reusable security libraries that help with certificate management. One such product is SecureBlackbox (http://www.secureblackbox.com). The main task when integrating certificate support into your application is to do everything with security in mind and avoid the mistakes that lead to security flaws. Best of all, of course, is to involve security specialists in the process.
The most commonly used certificate standard today is X.509, which describes the certificate format and distribution principles. Other certificate formats exist and are used in various communication protocols.
Certificate management topics are beyond the scope of this article. You will find several certificate-related articles in the SecureBlackbox knowledge base.
Secure transport protocols
The growth of the Internet made secure data transfer a necessity. One of the first engineering solutions was SSL (Secure Sockets Layer), developed by Netscape in 1994. It remains widespread to this day and is integrated into most browsers, web servers and other software and hardware systems that deal with the Internet. Several revisions of this protocol exist: SSLv2, SSLv3 and TLSv1, of which TLSv1 is the most popular. SSLv2 is no longer used due to several vulnerabilities discovered in it.
Secure Sockets Layer (SSL) is a session-level protocol for authentication and encryption that provides a secured communication channel between two sides (client and server). SSL provides confidentiality by generating a secret shared by the client and the server. It supports server authentication and optional client authentication in order to resist eavesdropping, message substitution and other outside interference in client-server applications. SSL sits at the transport level, below the application level, so most application-level protocols (such as HTTP, FTP, TELNET and so on) can run transparently over it.
For a better understanding of how SSL works, let's look at a simplified client-server communication scheme.
Before establishing a connection, the client composes a client hello message. This message contains information about the supported protocol versions and encryption methods, a random number and a session identifier. The message is then sent to the server.
The server answers either with its own hello message or with an error message. The server hello message is similar to the client's, but the server selects the encryption method that will be used, based on the information received from the client.
After its hello message has been sent, the server can send its certificate or a certificate chain (several certificates in which each one signs the next) for authentication. Authentication is required for key exchange except when the anonymous Diffie-Hellman algorithm is used. Key exchange can be carried out with the help of certificates corresponding to the encryption algorithms agreed during connection establishment; usually X.509 v3 certificates are used. At this stage the client obtains the server's public key, which can be used to encrypt the session key.
After the certificate is sent, the server can optionally issue a certificate request message to ask for the client's certificate if necessary.
After the last hello message the server sends a handshake completion message. When the client receives it, it must check the server's certificates and send a finalizing message indicating that the handshake is complete. Now the sides can start the encrypted data exchange.
Both the server and the client can send a finalization (goodbye) message before the communication session ends. After such a message is received, a similar message must be sent in response, and the connection is closed. Finalization messages are needed for protection against truncation attacks. If this message was sent before connection shutdown, the client can resume the session later; resuming a session takes less time than establishing a new one.
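From an application developer's point of view, all of this handshaking is usually hidden behind a library. A minimal client-side sketch using Python's standard ssl module (the hostname in the commented-out part is a placeholder; building the context itself needs no network):

```python
import ssl

# Build a client context that performs the checks described above:
# it loads the system's trusted CA roots, requires a valid server
# certificate, and verifies that the certificate matches the hostname.
ctx = ssl.create_default_context()
assert ctx.check_hostname                    # server name is verified
assert ctx.verify_mode == ssl.CERT_REQUIRED  # certificate is required

# To actually connect, one would do something like:
# import socket
# with socket.create_connection(("example.com", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#         print(tls.version())   # negotiated protocol version
```

The hello messages, certificate verification and key exchange all happen inside wrap_socket; the application simply reads and writes plaintext on the wrapped socket.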
It is also worth mentioning the SSH (Secure Shell) protocol. It resembles SSL in general but has some differences: SSH was designed for message exchange between UNIX servers and requires authentication of both sides; it supports logical channels inside one secured session; and it uses key pairs rather than certificates for authentication.
Secure transport protocols are an effective and well-tested means of data transfer over public communication channels, and these technologies are already widely used. The SSL protocol is an efficient basis for developing secured client-server applications that must use open communication channels. What you have to take into account, however, is that SSL encrypts the data only during transfer: the data is accessible in unprotected form on both the client and the server. So security must be comprehensive and well designed, and the communication channel must not be the only secured element.
Security in client-server and network applications
After reviewing main principles of cryptographic protection we can study how to use cryptography in action.
First let's review data transfer over network. When Internet appeared its main target was to make information available for everyone. Everything changes with time and today we want to protect most of information we transfer. We can book plane tickets or hotel rooms and we want to keep credit card number and sometimes destination or time of our trip in secret. On the one hand new technologies provide us with numerous opportunities and conveniences but on the other hand we face the danger of our data being intercepted and possibly altered. A lot of servers still use non-secure protocols for data transmission. Data transferred between local clients and servers is also threatened with interception. Anyone can intercept data transferred over local network. Usually unauthorized person appears to be company employee. Most employees have all necessary capabilities and all they have to do is install a couple of software programs to access to data that belongs to other workers. Statistics says that insiders are the cause of about 90 of 100 unauthorized access cases.
Use of SSL/TLS protocol is enough to secure the data transferred over network. As you already know even if someone can get such data decryption will take too much time. You can ask how to do it. There are many different ways to implement security of transferred data with SSL.
The cheapest way is to use the Stunnel application, which creates a secure channel between two computers. Such a channel is almost always transparent to the application that uses it, but it requires configuration and is not possible for all protocols. The main disadvantage of this mechanism is that an attacker can access unprotected data on the user's computer while the data travels between the application and Stunnel.
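As a sketch, a minimal client-side Stunnel configuration might look like the fragment below. The host names, ports, and service name are illustrative only; consult the Stunnel documentation for your version's exact directives.

```ini
; stunnel.conf -- forward plain local traffic into a TLS tunnel (illustrative)
client = yes                          ; act as a TLS client

[db-tunnel]                           ; hypothetical service name
accept  = 127.0.0.1:5000              ; local applications connect here, unencrypted
connect = server.example.com:5443     ; Stunnel forwards over TLS to this host
```

The local application then talks to 127.0.0.1:5000 as if it were the real server, which is exactly the transparency, and the exposure window on the local machine, described above.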
Fig. 5. If an application exchanges unencrypted data, a third-party application can gain access to that data.
It should be said that this is the best way when the client and/or server software cannot be changed -- in other words, when you have only the executable modules but not their source code. Although an attacker can still reach data on the user's computer, such protection is better than none at all. Stunnel can also be useful if you have integrated SSL support into the client-side application but cannot do the same on the server for some reason. In that case Stunnel can be installed on the server side. You must then verify the security of the server itself, but in general you will get a secure system.
A more secure way is to use components that allow SSL to be integrated directly into your application, for example SecureBlackbox. This approach is suitable when you develop your own application: integrating the protocol into the application itself increases security. An integrated solution is a must when the operational environment is unknown or insecure.
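As an illustration of the integrated approach, here is a minimal sketch in Python using the standard ssl module; the host name is hypothetical. An embedded TLS stack plays the same role in a script that a component such as SecureBlackbox plays in a compiled application: encryption happens inside the process, with no unprotected hop on the local machine.

```python
import socket
import ssl

def make_tls_connection(host, port=443):
    """Open a TCP connection and wrap it in TLS with certificate checking."""
    context = ssl.create_default_context()            # verifies server certificates
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse obsolete protocol versions
    raw = socket.create_connection((host, port))
    # wrap_socket also verifies that the certificate matches the host name
    return context.wrap_socket(raw, server_hostname=host)

# Usage (hypothetical host):
#   with make_tls_connection("server.example.com") as tls:
#       tls.sendall(b"sensitive request")
```

Because the data is encrypted before it ever leaves the application, a Trojan sniffing local traffic sees only ciphertext.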
Remember that you should use an SSL connection not only when data travels over the Internet but also on local networks. If even one channel is insecure, an attacker can use it to obtain the information he needs, or at least something that makes decrypting that information easier. So if your system transfers important data over a network, or even data that could merely assist an attack, you must use a secure connection. It protects the data from both unauthorized access and modification. Always remember the rule: any system is only as strong as its weakest part.
An attacker can try to access data not only during transfer but also while it rests on some medium such as a hard disk or tape. The attacker can reach stored data on both the client side and the server side.
Let's examine possible threats to the server. We cannot rely on server protection alone, even though operating system developers release patches when security problems are discovered; patching does not always save the situation and can sometimes even make it worse. Thus additional protection mechanisms are used alongside the OS built-in facilities for server-side data protection. While careful system configuration is up to the administrator, we can examine database protection in more detail.
A database can be protected in two ways. The first is to control access through the database server: the server checks all passwords and access rights. The disadvantage of this scheme is that an attacker who gains access to the server gains access to the database. For example, create a database on one computer and protect it with a password using the database server. Then create a database with the same name on another computer, protect it the same way, but use a different password. Now copy the first database to the second location: you can open the first database using the password set for the second one. This happens because the access-control information is stored not in the database but in the database server's configuration. So if you use such a database server, you must protect it very well and think about preventing not only database modification but also copying of the data files.
Another way to protect data is encryption. Some servers have built-in encryption capabilities, and there are even special SQL commands for this purpose. Note, however, that encryption slows down performance and has its own specifics.
The most attention should be paid to the security of software installed on the client side. As mentioned before, the user may have minimal or no knowledge of computer operations, so he can use a computer infected with a Trojan application for a long time and never notice. When developing a client application, you must therefore be prepared for the client computer to be controlled by third parties. If your application stores any data that might turn out to be important, that data should be encrypted.
User authentication is the keystone of security, and it must be foolproof. Using the user's ID or account name as a password, or using short passwords, is unacceptable from a security point of view. If the authentication system is designed badly, an attacker will have no trouble finding the password quickly; weak authentication can nullify all the security achieved through cryptography. It is recommended to choose passwords longer than eight characters that mix numbers and letters.

Of course such passwords are not easy to remember, especially when they have no meaning, but this problem is easily solved with external password management applications. It has become popular, for example, to keep passwords on USB drives and flash cards, where you can also store a certificate or other useful information next to the password list. Note that there are special smart cards and USB dongles for keeping X.509 certificates; such devices can increase security, but only certificates can be stored on them, so they cannot serve as password keepers. Keeping passwords on an external medium has several advantages: you can carry your passwords with you, and in case of danger the medium can be destroyed relatively easily. You can use a different password for each application or system, and you can easily use long passwords that are hard to guess or to find by brute force. Only the person who holds the device can access the system, which protects the computer not only from outside attackers but also from anyone who tries to use it in the owner's absence.
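To make the recommendation concrete, here is a small Python sketch (standard library only) that generates a strong random password and stores only a salted hash of it, as a well-designed authentication system would; the parameter values are illustrative.

```python
import hashlib
import secrets
import string

def generate_password(length=16):
    """Generate a random password mixing letters and digits."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def hash_password(password, salt=None):
    """Return (salt, digest); store these instead of the plain password."""
    if salt is None:
        salt = secrets.token_bytes(16)  # unique salt per user defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Constant-time comparison avoids leaking information via timing."""
    return secrets.compare_digest(hash_password(password, salt)[1], digest)
```

Because only the salt and the slow PBKDF2 digest are stored, an attacker who copies the password file still faces a costly brute-force search for each account.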
The multitier application architecture itself allows one more barrier against unauthorized access. You can restrict user access depending on the tasks the user performs, and more than that: you can build client modules so that the operations available to people with limited access rights are restricted right in the application. For example, there are bank branches where the set of operations performed by clerks is limited to one or two operations; in this case the client module should only be able to perform those operations, while the branch manager uses an advanced version of the application that allows altering the database. Security can thus be increased further by segmenting the application according to the tasks performed.
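A minimal sketch of such segmentation might look like this in Python; the role names and operations are hypothetical, chosen to echo the bank-branch example above.

```python
# Map each role to the operations its client module exposes (names are illustrative).
ALLOWED_OPERATIONS = {
    "clerk":   {"deposit", "withdraw"},
    "manager": {"deposit", "withdraw", "open_account", "alter_records"},
}

def perform(role, operation):
    """Refuse any operation outside the role's permitted set."""
    if operation not in ALLOWED_OPERATIONS.get(role, set()):
        raise PermissionError(f"{role!r} may not perform {operation!r}")
    return f"{operation} executed"
```

In a real deployment the clerk's build would simply not contain the privileged code paths at all, which is stronger than checking a role flag at run time; the table above only illustrates the idea.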
You can use a simple scheme to analyse potential gaps in your system's security:
- analyse the security of data storage and of data transfer channels;
- check whether there are moments when the data is not encrypted;
- if the data is not encrypted, check whether it is freely accessible;
- if the data is encrypted, check whether an attacker can obtain anything usable for recovering the encryption keys.
While this article describes only security basics, it is enough to understand the level of modern security systems. That level is high enough to assure data protection against the main attacks for a reasonable time. Unfortunately, many IT companies lack even basic knowledge about the security of distributed computer systems, and as a result we have a lot of vulnerable and insecure systems. We hope this article shows you the importance of security in modern life and helps you create really secure applications. Care about your clients' security, and your effort will be rewarded.