UK’s Information Commissioner’s Office (ICO) Slaps Fines on Facebook and Equifax
UK fines Facebook £500,000 for failing to protect user data
Facebook was fined £500,000 by the UK’s Information Commissioner’s Office (ICO) for its role in the Cambridge Analytica data scandal, which allowed unauthorized access to the personal information of 87 million users without sufficient consent.
The fine imposed by the ICO was calculated under the UK’s old Data Protection Act 1998, which carries a maximum penalty of £500,000, a small fee for a company that brought in $40.7bn (£31.5bn) in global revenue in 2017. The penalty could have been much larger had it fallen under the EU’s General Data Protection Regulation (GDPR), where a company could face a maximum fine of 20 million euros or 4% of its annual global revenue, whichever is higher, for such a privacy breach.
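For a sense of scale, the GDPR ceiling can be computed from the figures above (a rough sketch that ignores currency conversion; the €20 million flat cap and 4% rule are from the regulation, the revenue figure from the paragraph above):

```python
def gdpr_max_fine(annual_global_revenue, flat_cap=20e6):
    """Maximum GDPR fine: the higher of the flat cap (EUR 20 million
    under the regulation) or 4% of annual global revenue.
    Currency conversion is ignored in this rough sketch."""
    return max(flat_cap, 0.04 * annual_global_revenue)

# 4% of Facebook's £31.5bn 2017 global revenue is roughly £1.26bn,
# versus the £500,000 maximum under the Data Protection Act 1998.
print(f"{gdpr_max_fine(31.5e9):,.0f}")
```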
The investigation found that Facebook failed to keep the personal information of its users secure by failing to make suitable checks on developers using its platform.
Equifax recently received a similar £500,000 fine for its massive 2017 data breach, which exposed the personal and financial data of more than 145 million of its customers.
Cisco WebEx Meetings Server XML External Entity (CVE-2018-18895)
DESCRIPTION
Cisco WebEx Meetings Server includes a version of the Castor XML library that is affected by an XML External Entity (XXE) vulnerability; as a result, Cisco WebEx Meetings Server releases prior to 2.8MR3 and 3.0MR2 patch 1 are vulnerable. By exploiting this vulnerability, a remote, unauthenticated attacker can cause the disclosure of confidential data, denial of service, server-side request forgery, port scanning from the perspective of the machine where the parser is located, and other system impacts.
Vulnerable path: /WBXService/XMLService
Vulnerable parameter: siteName
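For illustration, a classic XXE payload declares an external entity in a DTD and references it from a field such as `siteName`; the XML envelope below is hypothetical, not Cisco’s actual schema. A crude but common mitigation is to reject any document that declares a DTD before parsing:

```python
# Simplified illustration of an XXE payload targeting a field such as
# siteName; the surrounding XML envelope is hypothetical, not Cisco's schema.
XXE_PAYLOAD = """<?xml version="1.0"?>
<!DOCTYPE request [
  <!ENTITY xxe SYSTEM "file:///etc/passwd">
]>
<request><siteName>&xxe;</siteName></request>"""

def contains_dtd(xml_text: str) -> bool:
    """Crude pre-parse check: XXE requires a DTD, so rejecting any
    document that declares one blocks this attack class outright."""
    return "<!DOCTYPE" in xml_text

assert contains_dtd(XXE_PAYLOAD)  # payload would be rejected
assert not contains_dtd("<request><siteName>acme</siteName></request>")
```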
SOLUTION
Update current Cisco WebEx Meetings Server to 2.8MR3, 3.0MR2 patch 1, or the upcoming 4.0 release.
REFERENCES
You can find Cisco’s announcement at the link below:
https://bst.cloudapps.cisco.com/bugsearch/bug/CSCvm56811
You can find more information about XXE at the link below:
https://www.owasp.org/index.php/XML_External_Entity_(XXE)_Processing
Castor XML fixed this issue with CVE-2014-3004.
CREDIT
Alphan Yavas from Biznet Bilisim A.S.
U-Boot verified boot bypass vulnerabilities (CVE-2018-18439, CVE-2018-18440)
Security advisory: U-Boot verified boot bypass
==============================================
The verified boot feature of the Universal Boot Loader - U-Boot [1] allows
cryptographic authentication of signed kernel images before their execution.
This feature is essential for maintaining a full chain of trust on systems
which are secure booted by means of a hardware anchor.
Multiple techniques have been identified that allow execution of arbitrary
code, within a running U-Boot instance, by means of externally provided
unauthenticated data.
All such techniques stem from the lack of memory allocation protection within
the U-Boot architecture, which results in several means of providing
excessively large images during the boot process.
Some implementers might regard the following issues as an intrinsic
characteristic of the U-Boot memory model, and consequently a mere aspect of
correct U-Boot configuration and command restrictions.
However, in our opinion, the inability of U-Boot to protect itself when
loading binaries is an unexpected result that is not trivial to understand,
and particularly important to emphasize in trusted boot scenarios.
This advisory details two specific techniques that exploit U-Boot's lack of
memory allocation restrictions, with a workaround detailed to mitigate the
most severe case.
It must be emphasized that the cases detailed in the next sections represent
only two possible occurrences of this architectural limitation; other U-Boot
image loading functions are extremely likely to suffer from the same
validation issues.
To a certain extent the identified issues are similar to one of the findings
reported as CVE-2018-1000205 [2]; however, they concern different functions,
which in some cases sit at a lower level and therefore earlier in the boot
image loading stage.
Again, all such issues are symptoms of the same core architectural limitation:
the lack of memory allocation constraints for received images.
It is highly recommended that implementers of trusted boot schemes review the
use of all U-Boot booting/loading commands, and not merely the two specific
ones involved in the findings below, to apply limitations (where
applicable/possible) to the size of loaded images in relation to the available
RAM.
It should also be emphasized that any trusted boot scheme must also rely on an
appropriate lockdown that prevents interactive consoles from ever being
prompted, whether through boot process interruption or failure.
U-Boot insufficient boundary checks in filesystem image load
------------------------------------------------------------
The U-Boot bootloader supports kernel loading from a variety of filesystem
formats, through the `load` command or its filesystem specific equivalents
(e.g. `ext2load`, `ext4load`, `fatload`, etc.)
These commands do not protect system memory from being overwritten when
loading files whose length exceeds the boundaries of the relocated U-Boot
memory region; memory is filled with the loaded file starting from the passed
`addr` argument.
Therefore an excessively large boot image, saved on the filesystem, can be
crafted to overwrite all U-Boot static and runtime memory segments, and in
general all device addressable memory starting from the `addr` load address
argument.
The memory overwrite can directly lead to arbitrary code execution, fully
controlled by the contents of the loaded image.
When verified boot is implemented, the issue allows its intended validation to
be bypassed, as the memory overwrite happens before any validation can take
place.
The following example illustrates the issue, triggered with a 129MB file on a
machine with 128MB of RAM:
```
U-Boot 2018.09-rc1 (Oct 10 2018 - 10:52:54 +0200)
DRAM: 128 MiB
Flash: 128 MiB
MMC: MMC: 0
# print memory information
=> bdinfo
arch_number = 0x000008E0
boot_params = 0x60002000
DRAM bank = 0x00000000
-> start = 0x60000000
-> size = 0x08000000
DRAM bank = 0x00000001
-> start = 0x80000000
-> size = 0x00000004
eth0name = smc911x-0
ethaddr = 52:54:00:12:34:56
current eth = smc911x-0
ip_addr = <NULL>
baudrate = 38400 bps
TLB addr = 0x67FF0000
relocaddr = 0x67F96000
reloc off = 0x07796000
irq_sp = 0x67EF5EE0
sp start = 0x67EF5ED0
# load large file
=> ext2load mmc 0 0x60000000 fitimage.itb
# In this specific example U-Boot falls in an infinite loop, results vary
# depending on the test case and filesystem/device driver used. A debugging
# session demonstrates memory being overwritten:
(gdb) p gd
$28 = (volatile gd_t *) 0x67ef5ef8
(gdb) p *gd
$27 = {bd = 0x7f7f7f7f, flags = 2139062143, baudrate = 2139062143, ... }
(gdb) x/300x 0x67ef5ef8
0x67ef5ef8: 0x7f7f7f7f 0x7f7f7f7f 0x7f7f7f7f 0x7f7f7f7f
```
It can be seen that memory belonging to U-Boot data segments, in this specific
case the global data structure `gd`, is overwritten with a payload
originating from `fitimage.itb` (filled with `0x7f7f7f7f`).
### Impact
Arbitrary code execution can be achieved within a U-Boot instance by means of
unauthenticated binary images, loaded through the `load` command or its
filesystem specific equivalents.
It should be emphasized that all load commands are likely to be affected by the
same underlying root cause of this vulnerability.
### Workaround
The optional `bytes` argument can be passed to all load commands to restrict
the maximum size of the retrieved data.
The issue can therefore be mitigated by passing a `bytes` argument with a
value consistent with the U-Boot memory region mapping and size.
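The effect of such a size limit can be modeled in a few lines (a simplified model for illustration, not actual U-Boot code; names and the RAM figures mirror the example above):

```python
def safe_load(addr, file_size, ram_start, ram_size, max_bytes=None):
    """Model of a bounded load: refuse files that would overflow the RAM
    region, mirroring what an explicit `bytes` limit achieves.
    Returns the number of bytes that would be copied."""
    limit = file_size if max_bytes is None else min(file_size, max_bytes)
    if addr + limit > ram_start + ram_size:
        raise ValueError("load would overwrite memory past the RAM region")
    return limit

# 129 MiB file into 128 MiB of RAM at 0x60000000, as in the example above:
# an unbounded load overflows, while a bytes limit keeps it in range.
MiB = 1024 * 1024
print(safe_load(0x60000000, 129 * MiB, 0x60000000, 128 * MiB,
                max_bytes=64 * MiB))
```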
U-Boot insufficient boundary checks in network image boot
---------------------------------------------------------
The U-Boot bootloader supports kernel loading from a variety of network
sources, such as TFTP via the `tftpboot` command.
This command does not protect system memory from being overwritten when
loading files whose length exceeds the boundaries of the relocated U-Boot
memory region; memory is filled with the loaded file starting from the passed
`loadAddr` variable.
Therefore an excessively large boot image, served over TFTP, can be crafted to
overwrite all U-Boot static and runtime memory segments, and in general all
device addressable memory starting from the `loadAddr` load address argument.
The memory overwrite can directly lead to arbitrary code execution, fully
controlled by the contents of the loaded image.
When verified boot is implemented, the issue allows its intended validation to
be bypassed, as the memory overwrite happens before any validation can take
place.
The issue can be exploited by several means:
- An excessively large crafted boot image file is parsed by the
`tftp_handler` function which lacks any size checks, allowing the memory
overwrite.
- A malicious server can manipulate TFTP packet sequence numbers to store
downloaded file chunks at arbitrary memory locations, given that the
sequence number is directly used by the `tftp_handler` function to calculate
the destination address for downloaded file chunks.
Additionally the `store_block` function, used to store downloaded file
chunks in memory, when invoked by `tftp_handler` with a `tftp_cur_block`
value of 0, triggers an unchecked integer underflow.
This potentially allows erasure of memory located before `loadAddr` when a
packet with a null block number is sent following at least one valid packet.
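The destination-address arithmetic can be modeled as follows (a simplified sketch of the `store_block` calculation, assuming the default 512-byte TFTP block size; not U-Boot's actual code):

```python
def store_block_addr(load_addr, tftp_cur_block, block_len=512):
    """Simplified model: the destination address is derived directly from
    the attacker-controlled TFTP block number, with no underflow check.
    Block 1 lands at load_addr; block 0 underflows below it."""
    return load_addr + (tftp_cur_block - 1) * block_len

# A null block number after a valid packet yields an address *below*
# loadAddr, corrupting memory in front of the load region.
assert store_block_addr(0x60000000, 0) < 0x60000000
# Forged sequence numbers place chunks at attacker-chosen offsets.
assert store_block_addr(0x60000000, 2) == 0x60000000 + 512
```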
The following example illustrates the issue, triggered with a 129MB file on a
machine with 128MB of RAM:
```
U-Boot 2018.09-rc1 (Oct 10 2018 - 10:52:54 +0200)
DRAM: 128 MiB
Flash: 128 MiB
MMC: MMC: 0
# print memory information
=> bdinfo
arch_number = 0x000008E0
boot_params = 0x60002000
DRAM bank = 0x00000000
-> start = 0x60000000
-> size = 0x08000000
DRAM bank = 0x00000001
-> start = 0x80000000
-> size = 0x00000004
eth0name = smc911x-0
ethaddr = 52:54:00:12:34:56
current eth = smc911x-0
ip_addr = <NULL>
baudrate = 38400 bps
TLB addr = 0x67FF0000
relocaddr = 0x67F96000
reloc off = 0x07796000
irq_sp = 0x67EF5EE0
sp start = 0x67EF5ED0
# configure environment
=> setenv loadaddr 0x60000000
=> dhcp
smc911x: MAC 52:54:00:12:34:56
smc911x: detected LAN9118 controller
smc911x: phy initialized
smc911x: MAC 52:54:00:12:34:56
BOOTP broadcast 1
DHCP client bound to address 10.0.0.20 (1022 ms)
Using smc911x-0 device
TFTP from server 10.0.0.1; our IP address is 10.0.0.20
Filename 'fitimage.bin'.
Load address: 0x60000000
Loading: #################################################################
...
####################################
R00=7f7f7f7f R01=67fedf6e R02=00000000 R03=7f7f7f7f
R04=7f7f7f7f R05=7f7f7f7f R06=7f7f7f7f R07=7f7f7f7f
R08=7f7f7f7f R09=7f7f7f7f R10=0000d677 R11=67fef670
R12=00000000 R13=67ef5cd0 R14=02427f7f R15=7f7f7f7e
PSR=400001f3 -Z-- T S svc32
```
It can be seen that the program counter (PC, r15) is set to an address
originating from `fitimage.bin` (filled with `0x7f7f7f7f`), as a result of
the U-Boot memory overwrite.
### Impact
Arbitrary code execution can be achieved within a U-Boot instance by means of
unauthenticated binary images, passed through TFTP and loaded through the
`tftpboot` command, or by a malicious TFTP server capable of sending arbitrary
response packets.
It should be emphasized that all network boot commands are likely to be
affected by the same underlying root cause of this vulnerability.
### Workaround
The `tftpboot` command lacks any optional argument to restrict the maximum
size of downloaded images; therefore the only workaround at this time is to
avoid using this command in environments that require trusted boot.
Affected version
----------------
All released U-Boot versions, at the time of this advisory release, are
believed to be vulnerable.
All tests have been performed against U-Boot version 2018.09-rc1.
Credit
------
Vulnerabilities discovered and reported by the Inverse Path team at F-Secure,
in collaboration with Quarkslab.
CVE
---
CVE-2018-18440: U-Boot insufficient boundary checks in filesystem image load
CVE-2018-18439: U-Boot insufficient boundary checks in network image boot
Timeline
--------
2018-10-05: network boot finding identified during internal security audit
by Inverse Path team at F-Secure in collaboration with Quarkslab.
2018-10-10: filesystem load finding identified during internal security audit
by Inverse Path team at F-Secure.
2018-10-12: vulnerability reported by Inverse Path team at F-Secure to U-Boot
core maintainer and Google security, embargo set to 2018-11-02.
2018-10-16: Google closes ticket reporting that ChromeOS is not affected due
to their specific environment customizations.
2018-10-17: CVE IDs requested from MITRE and assigned.
2018-11-02: advisory release.
References
----------
[1] https://www.denx.de/wiki/U-Boot
[2] https://lists.denx.de/pipermail/u-boot/2018-June/330487.html
Permalink
---------
https://github.com/inversepath/usbarmory/blob/master/software/secure_boot/Security_Advisory-Ref_IPVR2018-0001.txt
New PortSmash Side-Channel Vulnerability (CVE-2018-5407)
A new vulnerability called PortSmash (CVE-2018-5407) has been discovered, impacting all CPUs that use a Simultaneous Multithreading (SMT) architecture. SMT is a technology that allows multiple computing threads to be executed simultaneously on a single CPU core.
PortSmash is classified as a side-channel attack: a technique for leaking encrypted data from a computer’s memory or CPU by recording and analyzing discrepancies in operation times, power consumption, electromagnetic leaks, or even sound, in order to gain additional information that may help break encryption algorithms and recover the data the CPU processed.
An example of how the attack may work:
A malicious process runs alongside legitimate processes, using SMT’s parallel thread execution capabilities. The malicious PortSmash process then leaks small amounts of data from the legitimate process, helping an attacker reconstruct the encrypted data processed inside it.
The team that discovered the vulnerability published a proof-of-concept (PoC) code on GitHub that demonstrates a PortSmash attack on Intel Skylake and Kaby Lake CPUs.
To rectify the issue, organizations are urged to install an Intel-provided patch, which was released before the PortSmash proof-of-concept was published, or to disable SMT/Hyper-Threading in the CPU’s BIOS until the security patches can be installed.
PortSmash has joined the list of newly discovered side-channel vulnerabilities such as TLBleed, Meltdown, Foreshadow and Spectre.
Eurostar Customers Reset Passwords After Security Breach
Eurostar forced all of its customers to reset their passwords after indications of a possible security breach in which hackers attempted to access user accounts. In an email, customers were notified that a threat actor may have made automated attempts to log in with stolen user emails and passwords that were obtained by an unknown method. The email also stated: “We’ve since carried out an investigation which shows that your account was logged into between the 15 and 19 October. If you didn’t log in during this period, there’s a possibility your account was accessed by this unauthorized attempt.”
The email instructed customers to reset their passwords and check their accounts for unusual activity, while assuring them that their payment card information had not been compromised.
Please review our Best Practices for Creating a Password post when creating a new password.
Continuous Monitoring: Academic Paper
The Federal Information Security Management Act (FISMA) of 2002 requires that government agencies report on their information technology security status annually to the Office of Management and Budget (OMB). Under current FISMA guidelines, any system owner within a government agency is required to complete the certification and accreditation (C&A) process. The process requires that security controls and policies be implemented for all subsystems within the environment, including host-based hardening, Host Based Security Systems (HBSS), firewalls, and Intrusion Prevention Systems (IPS). Once the security systems are deployed and technical security controls are in place, an outside independent organization typically validates the security controls through a risk assessment process. Once the process is complete, the information is reviewed and the agency decides whether to grant the system an Authorization to Operate (ATO). Under new guidelines, all systems are required to monitor the baseline security controls and document any changes to the system by implementing a continuous monitoring program. A continuous monitoring plan should be implemented to assess the risk to the environment based on changes to the system. Currently, a number of organizations make recommendations for implementing a continuous monitoring program but differ on its definition and implementation. A continuous monitoring program can be implemented if an organization uses common sense in conjunction with the recommendations published by the National Institute of Standards and Technology, SANS, and the Department of Homeland Security.
Continuous monitoring is the ongoing assessment of change, and the related risk, to the baseline configuration of security-authorized operational IT systems within the enterprise. The goal of a continuous monitoring program is to determine whether built-in system security controls continue to be effective over time. The proper balance of policy, context, processes, and technology application dictates the overall effectiveness of the program. A number of government organizations have developed standards and recommendations for developing a continuous monitoring strategy. The National Institute of Standards and Technology (NIST) Special Publication 800-137, “Information Security Continuous Monitoring for Federal Information Systems and Organizations,” presents guidelines for applying NIST’s Risk Management Framework (RMF) to federal systems. In that publication NIST defines continuous monitoring as follows: “Information security continuous monitoring (ISCM) is defined as maintaining ongoing awareness of information security, vulnerabilities, and threats to support organizational risk management decisions.”
NIST, in conjunction with the Department of Homeland Security (DHS), developed NIST Interagency Report 7756, “CAESARS Framework Extension: An Enterprise Continuous Monitoring Technical Reference Model (Second Draft),” which extends the original CAESARS reference architecture describing standard protocols and systems for producing an automated continuous monitoring system. Hardware and software tools that use a standard set of protocols to monitor all assets within the enterprise have not yet been developed. DHS plans to award contracts totaling more than $6 billion to a number of companies to develop and implement continuous diagnostic and mitigation tools. In addition, the SANS Institute, working with the Department of Defense (DOD), published “Twenty Critical Security Controls for Effective Cyber Defense,” which makes a number of recommendations on the implementation of continuous monitoring. Even though complete monitoring tool sets have not been fully developed, organizations can use SANS’s recommended controls to begin implementing a continuous monitoring program that provides situational awareness of the enterprise. Within the recommended 20 controls, three (Critical Controls 4, 14, and 16) make recommendations for implementing continuous monitoring capabilities across enterprise networks. Critical Control 4 provides recommendations on vulnerability scanning and remediation. Critical Control 14 discusses the importance of auditing within the enterprise. Lastly, Critical Control 16 discusses account monitoring and control.
Unfortunately, many organizations fail to monitor their security controls for changes that may affect the security posture of the system. Once security configuration baselines are applied to systems, little is done to update the controls as the systems change. Implementing vulnerability scanning and compliance tools is an easy way to protect the enterprise against known threats.
Vulnerability scanning tools incorporate two different scanning mechanisms, compliance scans and vulnerability scans, to protect the enterprise. Compliance scans check systems against a known set of configuration security baselines or policies used for system hardening, such as those published by the Defense Information Systems Agency (DISA) and the Center for Internet Security (CIS). Compliance scans should be run against all systems on the network to maintain ATO compliance and to detect whether any unauthorized changes were made to circumvent security. Vulnerability scans, on the other hand, check systems against a known set of threat signatures. Vulnerability scans list known threats based on Common Vulnerabilities and Exposures (CVE) alerts and vendor patch updates for common operating systems and application software. In addition, vulnerability scanning tools can provide network discovery scans to check for unauthorized devices that may be connected to the network. Discovery scans can be used to support the organization’s configuration management policies. SANS Critical Control 4 states: “run automated vulnerability scanning tools against all systems on the network on a weekly or more frequent basis and deliver prioritized lists of the most critical vulnerabilities to each responsible system administrator along with risk scores that compare the effectiveness of system administrators and departments in reducing risk. Where feasible, vulnerability scanning should occur on a daily basis using an up-to-date vulnerability-scanning tool. Any vulnerability identified should be remediated in a timely manner, with critical vulnerabilities fixed within 48 hours.”
Vulnerability scanning should be incorporated into any organization’s security plan and should be the first step in the implementation of a continuous monitoring program. Many scanning tools are available in the commercial market, such as Tenable’s NESSUS vulnerability scanner and eEye Digital’s Retina vulnerability scanner.
Audit logs are one of the most important security controls to implement when developing security policies within the enterprise. Audit logs provide a wealth of information on the daily activities of authorized system users, and in some cases unauthorized users as well. Almost every piece of equipment used in building an IT infrastructure provides audit logging capability. Unfortunately, many organizations do not correctly implement audit logging policies when developing a System Security Plan. An organization’s audit policy may include the requirement to enable audit logging but not specify which logs are enabled, the time period for review, the retention time, or how the logs will be consolidated offline for protection. SANS Critical Control 14 states: “Deficiencies in security logging and analysis allow attackers to hide their location, malicious software used for remote control, and activities on victim machines. Even if the victims know that their systems have been compromised, without protected and complete logging records they are blind to the details of the attack and to subsequent actions taken by the attackers. Without solid audit logs, an attack may go unnoticed indefinitely and the particular damages done may be irreversible.”
Information systems are under constant threat, with attacks originating from outside or inside the organization. Incorporating strong audit logging capabilities and policies will help detect unauthorized users and configuration changes, provide information for forensic investigations, and support system performance monitoring. Organizations should, at a minimum, enable audit logging on network equipment for successful and failed logons, logoffs, account lockouts, user account and password management, policy changes, object access, and installed/uninstalled applications. Audit logs should be reviewed daily for any suspicious activity and retained offline for a minimum of one year. Reviewing audit logs can be very difficult, if not impossible, when each device on the network must be accessed to review its logs individually. Organizations should incorporate tools to consolidate all device logs into a single location for review. This prevents internal threats and outside attackers from deleting audit logs to cover the tracks of their malicious activity. The system should have the capability to send alerts to security personnel for certain events in real time, either by email or Short Message Service (SMS).
Audit log consolidation should be considered the second step in the implementation of a continuous monitoring program. Enabling audit logs on devices and consolidating the logs to a central device is one of the best ways to detect threats and provide situational awareness of the enterprise. Audit logs can provide insight into what is considered normal activity and what is not. Many tools for audit consolidation are available from companies such as GFI Software (GFI Events Manager) and Splunk, whose products can ingest any type of ANSI-based text file and then search for any data tag associated with a source event.
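As a toy illustration of the consolidation-and-alert idea (the log format, field layout, and threshold below are hypothetical, not tied to any specific product):

```python
from collections import Counter

def flag_failed_logons(log_lines, threshold=3):
    """Count 'FAILED LOGON' events per user across consolidated logs
    and flag users at or above the threshold. The log format (user as
    the last whitespace-separated field) is illustrative only."""
    counts = Counter(
        line.split()[-1]
        for line in log_lines
        if "FAILED LOGON" in line
    )
    return sorted(user for user, n in counts.items() if n >= threshold)

logs = [
    "2013-01-02 03:04:05 host1 FAILED LOGON alice",
    "2013-01-02 03:04:09 host2 FAILED LOGON alice",
    "2013-01-02 03:04:12 host1 FAILED LOGON alice",
    "2013-01-02 03:05:00 host3 FAILED LOGON bob",
]
assert flag_failed_logons(logs) == ["alice"]
```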
Among the most frequent targets of hackers are user accounts, default accounts, service accounts, and inactive accounts. Hackers will target default accounts that have not been disabled with dictionary attacks, and such attacks, once successful, are difficult to detect. Even though most organizations have security policies on managing account access, poor oversight by management means policy is not strictly enforced. The use of service accounts to access systems is all too common, which makes correlating specific users with access very difficult. If an attacker merely discovers a valid user ID, they have half of the puzzle needed to hack the account. If an attacker gains access to a system with an active user account, they can usually find a way to gain administrator-level access and exploit the entire system. Therefore, account monitoring policies should be reviewed on a regular basis and incorporated into the organization’s continuous monitoring program.
Regular monitoring of account access is one of the easiest ways to mitigate risk to the entire system. Account management should be incorporated into the daily operations of every System Administrator with account creation authority. First, password requirements should be enabled on all systems, requiring passwords at least 14 characters in length that include upper-case, lower-case, and special characters. Account passwords must be changed after 60 days, and inactive accounts disabled after 30 days. In addition, default system accounts, including the default administrator account, should be disabled and renamed. Accounts for terminated employees should be disabled immediately; all too often those accounts are left active, making exploitation by a disgruntled employee effortless. Moreover, System Administrators who leave or are terminated should have their accounts disabled before leaving the building, and the system should be closely monitored for any unauthorized activity. Lastly, all active accounts should be fully reviewed on a regular basis for employees who have transferred to new positions outside the division.
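The password requirements described above can be captured in a simple policy check (a minimal sketch of the stated policy; the function name and exact character classes are illustrative):

```python
import string

def meets_policy(password: str) -> bool:
    """Check the policy described above: at least 14 characters, with
    upper-case, lower-case, and special characters all present."""
    return (
        len(password) >= 14
        and any(c.isupper() for c in password)
        and any(c.islower() for c in password)
        and any(c in string.punctuation for c in password)
    )

assert meets_policy("Correct-Horse-Battery!")
assert not meets_policy("short!Aa")             # too short
assert not meets_policy("alllowercase-and-14")  # no upper-case letter
```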
Incorporating account monitoring into a continuous monitoring program is a quick and effective means to mitigate risk to the system. In addition, the cost associated with implementing account monitoring is minimal, since it mostly entails an increase in security awareness and policy enforcement.
The number of attacks increases daily, and the job of defending the system becomes more difficult, especially when defending against zero-day vulnerabilities. Organizations tend to apply system security at the least possible cost. However, with ever-increasing regulatory requirements, organizations must find cost-effective ways to protect their networks and increase situational awareness of them. To meet mandated requirements, organizations can implement a cost-effective continuous monitoring program by conducting regular compliance and vulnerability scans, consolidating audit reporting, and maintaining a comprehensive account management policy to sustain the security posture of their enterprise.





