One in four don’t lock their door, Somfy survey reveals. #Somfy #Security #SmartHome

Smart Home specialist Somfy has revealed that some of the most basic security measures are not being taken by UK residents, leaving their homes extremely vulnerable to opportunist burglars.

The research – carried out among 2,008 UK adults through a Censuswide survey* – revealed that, on average, 1 in 4 never lock their front door when they go out, despite nearly 1 in 5 (17%) having experienced a burglary, and 1 in 3 of those having been burgled more than once. Alarmingly, 4% of people never lock anything when they go out.

The survey also revealed the most security-conscious regions, the UK’s riskiest and safest areas as perceived by residents, as well as how neighbours would react if they witnessed a burglary. Highlights include:

  • Londoners are the least security-conscious when it comes to home security – An overwhelming 33% never lock their front door when they go out and 48% never lock their back door, despite London being the riskiest region for burglaries – double the UK average. The South West is the most responsible region, with 83% locking their front doors when leaving the house.
  • Glasgow had the highest rate of have-a-go heroes, with 23% saying they would confront a burglar if they saw a neighbour’s house being burgled – On average, 17% of those across the UK would confront a burglar at a neighbour’s house, with one saying they would ‘use a baseball bat’ to intimidate the intruder, exposing themselves to danger as well as potential legal issues.
  • The majority (83%) would call the police if they witnessed a neighbourhood burglary – However, some 3% would do absolutely nothing if they witnessed a burglary in a nearby house, with one even stating, “it would depend on which neighbour’s house it was,” as to whether they would help.
  • The South West has the most proactive neighbours – Only 1% of residents wouldn’t take any form of action when witnessing a break-in. The North East had the highest percentage of neighbours who would turn a blind eye, with 6% saying they wouldn’t do anything.
  • Only 60% of those living in Greater London feel safe, followed closely by those living in the City of London at 63% – The North West was also cited as an area where some residents felt vulnerable, with only 58% of Liverpudlians saying they felt safe, as well as only 64% of Mancunians. The North East closely followed, with only 2 in 3 feeling safe (68%).
  • The South West and Scotland are considered the safest areas to live by locals, with the majority (80%) feeling totally safe in their neighbourhood – Bristol (79%) and Plymouth (84%) rank amongst the top three safest cities, but Edinburgh was crowned the safest city overall, with an overwhelming 88% agreeing that they feel safe and secure where they live.

Commenting on the findings, Steven Montgomery, Managing Director of Somfy UK and Ireland, said: “We were extremely surprised that people aren’t being more vigilant when it comes to home security, especially given some have had first-hand experience of burglaries.  We were also shocked that there wasn’t a correlation between the areas perceived unsafe and the efforts people made with security – if you don’t feel safe, you should at the very least lock your doors.”

Of those surveyed, 58% of UK residents have adopted Smart Home products, with the most common being outdoor lighting (40%), followed by Smart cameras (25%), Smart alarms (19%) and finally, Smart locks (15%). More than half (53%) of those using Smart Home security products said they feel safer as a direct result.

Steven added: “It was also interesting that nearly half of UK residents don’t have any form of Smart Home security. The biggest advantage of including smart home devices and security features is that they can prevent break-ins from occurring altogether. Burglars generally look for easy targets and unprotected homes. If they sense that you have a modern security system, most burglars won’t even bother targeting your property.”

When asked what people perceived as the best Smart Home security products on the market, external security cameras came out on top with 30%, followed by Smart alarms (20%), Smart locks (10%) and finally, internal security cameras and Smart lighting both at 9%.

*The research was conducted by Censuswide, with 2,008 respondents aged 16+ in GB between 13.12.19 and 17.12.19. The survey was conducted from a random sample of UK adults. Censuswide abides by and employs members of the Market Research Society, which is based on the ESOMAR principles.

Review – Secure data 1TB SSD secure drive. James Bond tech. #data #securedata #drives

I have covered several drives here in the past, and to be fair some of them are quite the kit, with a touch of James Bond about them. This one is no different, but it has a trick up its sleeve compared to the others when it comes to the tech. Most of what I have covered uses keypads; this one does not. In fact, there is little to it at all: you just get a cable in the box, there are a couple of LEDs on top, and then there is the Bluetooth unlocking, which is excellent and works every time without fail – I tried and tested it on several phones just to be sure.

When it comes to drives like this, people ask about the FIPS (Federal Information Processing Standards) level. Each year these standards get stricter, with higher security and encryption levels, making the drives harder to break into – and in some cases impossible to physically break open at all. Currently FIPS 140-2 defines four security levels, and a successor standard is not too far away by the looks of things. You can read up more on this HERE.

The setup process is painless here, which is great. All you need is the app, available on Android and iOS; plug the unit into your PC, pair them up, and away you go – you then have the options to control what the drive does. You will see how in the video review. You also get DriveSecurity Antivirus powered by ESET with a one-year licence, which is great. The drive does not require any host software, which is an added bonus, and file transfer is quick, with no lag or issues here.

Features:

  • FIPS 140-2 Level 3 Validated, AES 256-bit Hardware Encryption
  • Wireless Unlock via Mobile App. 2 Factor Authentication via Text
  • No software, OS Independent (Works on and with any OS)
  • Unique Password Recovery features via Text
  • Remote Data Wipe if device lost or stolen
  • Read Only Mode, Device Step Away AutoLock
  • Malware Protection: Preloaded with USB Antivirus

Would I recommend it? Yes, though some people will complain about the price. That is the norm today – you pay for storage no matter what tech you buy, just like smartphones – but here you also get everything else on offer, and the ease of use, on top. This model costs €505 and the range goes up to €3,500 depending on storage.

BUY HERE 

#PokemonGO has one big flaw regarding personal information. #iOS #Android #Security #App

Just in case you live under a rock: Pokémon Go has been the trending topic over the last few days, simply because it has surpassed the user base of allegedly “bigger” apps.

And more than that, simply because it’s Pokémon! Maybe our dear Jim is not a big fan of the series, but I personally was when I was younger.

It is one of the few games that uses Augmented Reality (AR), and that alone is huge progress. The team behind the GO part of it is none other than Niantic, the team that brought us INGRESS.

Thus far everything sounds great about the game, but recently something really disturbing happened.

When you first open the app, as with every other app out there, you need to log in with credentials. Within the app there are just two options: signing in with a Pokémon account (which is impossible if you don’t have one already) or with a Google account.

When you use a Google account (which is what we all probably do on Android phones), the app generally shows a pop-up letting you know which features it will be accessing. But when you do this on iOS, nothing happens.

The blogger Adam Reeve found this behaviour rather weird and went to his Google account page to see what he had given access to. The answer set all his alarms ringing:

 Pokemon Go has Full Access to your Google account

When you grant “full access” to an app, it can see and modify practically all the information in your Google account. To give you an idea, this is what the Google security page says it can do:

  • Read all your email
  • Send email as you
  • Access all your Google drive documents (including deleting them)
  • Look at your search history and your Maps navigation history
  • Access any private photos you may store in Google Photos

This is something that shouldn’t happen: all an app like this would actually need – even to handle something like a forgotten password – is your name and email address.

Apparently this is not consistent across all installs, as it has happened only to some iOS and Android users. If you want to see whether you’re one of them, you can do it HERE.
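For context on how this kind of over-permissioning happens: when a developer signs users in with Google, the OAuth scopes listed in the authorization request determine exactly what the app can access. Here is a minimal sketch of a request that asks only for basic profile data – the client ID and redirect URI are placeholders, and `openid email profile` are Google’s standard basic scopes:

```python
# Sketch: the OAuth scopes in the authorization request determine exactly
# what a signed-in app can access. Client details below are placeholders.
from urllib.parse import urlencode

AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"

def build_auth_url(client_id, redirect_uri, scopes):
    """Build an OAuth 2.0 authorization URL requesting only the given scopes."""
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        "scope": " ".join(scopes),  # space-separated, per the OAuth 2.0 spec
    }
    return AUTH_ENDPOINT + "?" + urlencode(params)

# Basic profile only -- user ID and email address, nothing else.
url = build_auth_url(
    "example-client-id",       # placeholder client ID
    "https://example.com/cb",  # placeholder redirect URI
    ["openid", "email", "profile"],
)
print(url)
```

An app requesting only these scopes could never read your email or Drive files, which is why the “full access” grant Reeve found was so alarming.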

Niantic has since acknowledged the issue and made a statement about the incident:

We recently discovered that the Pokémon Go account creation process on iOS erroneously requests full access permission for the user’s Google account. However, Pokémon Go only accesses basic Google profile information (specifically, your user ID and e-mail address) and no other Google account information is or has been accessed or collected. Once we became aware of this error, we began working on a client-side fix to request permission for only basic Google account information, in line with the data we actually access. Google has verified that no other information has been received or accessed by Pokémon Go or Niantic. Google will soon reduce Pokémon Go’s permission to only the basic profile data that Pokémon Go needs, and users do not need to take any actions themselves.

So, although there seems to be nothing to worry about, it would be better to keep an eye on your Pokémon Go access until everything has settled down.

Also remember that, apart from this issue, there are a ton of fake apps out there carrying malware that could harm your device.

If you don’t live in a country where the game is fully available at the moment, find a secure and known source to get the APK from.

Source, VIA

Would you buy a $17,000 smartphone? This is @SIRINLABS’ “Solarin”. #Android #Security

Sirin Labs is an Israeli startup that’s aiming to play the same game as Vertu by offering luxury smartphones with outstanding security suites.

This is the case with this $16,000 (approx. £9,500) smartphone, which promises an “experience out of this world”.

The phone’s look is specially designed to strike a balance between being good-looking and resistant.

First of all, let’s take a look at some of the internals:

  • Display: 5.5 inches, IPS
  • Resolution: 2560×1440 pixels
  • OS: Android 5.1 Lollipop
  • Chipset: Qualcomm Snapdragon 810
  • GPU: Adreno 506
  • Memory: 128 GB (no SD slot) / 4 GB RAM
  • Camera: rear 23.8 MP with laser/phase-detection autofocus and four-tone flash; front 8 MP with front-facing flash
  • Battery: Li-Ion 4,000 mAh

At first glance it’s nothing we have not seen before (for twenty times less money, haha), but what makes this device “unique” is the security suite, built with mobile security firm Zimperium and said to meet high military standards, protecting the smartphone against advanced device, application and network attacks.

It also comes with Advanced Encryption Standard (AES) 256-bit chip-to-chip encryption – army-grade technology notable for its simplicity: all you need to do is move the switch located on the back of the device and you’re ready to go.

The device itself comes in a one-of-a-kind design in multiple colours, but as you change the colour, you change the price. To give you an idea, this is the price tag of each:

Colours:
  • Fire Black Carbon Leather with Titanium – $13,800 + taxes
  • Fire Black Carbon Leather with Diamond-Like Carbon – $14,900 + taxes
  • Fire Black Carbon Leather with Yellow Gold – $17,400 + taxes
  • Crystal White Carbon Leather with Diamond-Like Carbon – $15,900 + taxes

So, what do you think about this? Do you think it is the most secure smartphone? Would you pay the price for being “anonymous”? Let us know in the comments section below, or join the Twitter conversation.

Source

 

Nest launches HD Security Camera Nest Cam by @Tiwaash #Tech #Nest #NestCam

With innovations in technology every day come some cool things, and Nest, the smart homeware firm, has combined its technology with security. Read on to know more.

The Google-owned Nest has launched a camera that senses movement in a user’s home and alerts them via a smartphone app. The camera will have a night vision mode and customers will be able to pay for their videos to be archived for up to 30 days.

The camera is built upon the foundations of earlier Dropcam designs. It will be capable of streaming video live, and customers will be able to pay subscription fees of £8 or £24 per month to keep the video for 10 or 30 days, respectively. The Nest Cam will capture in high-definition 1080p and will be available from the beginning of July in the UK, Republic of Ireland, France, the Netherlands and Belgium.

Nest Protect Updated:

The existing Nest Protect product, which comprises a smoke and carbon monoxide (CO) detector, has been updated and will alert the user if it senses danger, the firm said. The Learning Thermostat, for which Nest is perhaps best known, will also issue alerts if temperatures drop to dangerous levels. If Nest Protect detects smoke, it will also communicate with the thermostat, as well as its owner.
Each of the products will be integrated with the firm’s 5.0 smartphone app.

About Nest

Nest was set up by two former Apple executives: Tony Fadell, who is known as the “father of the iPod”, and Matt Rogers. It was bought by Google for $3.2bn (£2bn) in January 2014. Announcing the acquisition, Google said that Nest would maintain its own distinct identity under Mr Fadell’s continued stewardship.

The Nest Cam can be yours for $199 and can be ordered from here.

Stay tuned for more tech buzz.

#FakeID flaw in Android leaves millions of phones vulnerable since 2010. #Android #Security #JTB

If you are an Android user, you need to read this.

Bluebox reported this today:

Every Android application has its own unique identity, typically inherited from the corporate developer’s identity. The Bluebox Security research team, Bluebox Labs, recently discovered a new vulnerability in Android, which allows these identities to be copied and used for nefarious purposes.

Dubbed “Fake ID,” the vulnerability allows malicious applications to impersonate specially recognized trusted applications without any user notification. This can result in a wide spectrum of consequences. For example, the vulnerability can be used by malware to escape the normal application sandbox and take one or more malicious actions: insert a Trojan horse into an application by impersonating Adobe Systems; gain access to NFC financial and payment data by impersonating Google Wallet; or take full management control of the entire device by impersonating 3LM.

Implications:

This is a widespread vulnerability dating back to the January 2010 release of Android 2.1 and affecting all devices that are not patched for Google bug 13678484, disclosed to Google and released for patching in April 2014. All devices prior to Android 4.4 (“KitKat”) are vulnerable to the Adobe Systems webview plugin privilege escalation, which allows a malicious application to inject Trojan horse code (in the form of a webview plugin) into other apps, which leads to taking control of the entire app, all of the app’s data, and being able to do anything the app is allowed to do. Android 4.4 is specifically immune due to a change in the webview component (the switch from webkit to Chromium moved away from the vulnerable Adobe-centric plugin code).

Users of devices from specific vendors that include device administration extensions are at risk for a partial or full device compromise by malware. The 3LM device extensions (temporarily owned by Motorola and Google) are present in various HTC, Pantech, Sharp, Sony Ericsson, and Motorola devices – and are susceptible to the vulnerability as well.

Other devices and applications that depend upon the presence of specific signatures to authenticate an application may also be vulnerable. Essentially anything that relies on verified signature chains of an Android application is undermined by this vulnerability.

How it works:

Android applications are typically cryptographically signed by a single identity, via the use of a PKI identity certificate. The use of identity certificates to sign and verify data is commonplace on the Internet, particularly for HTTPS/SSL use in web browsers. As part of the PKI standard, an identity certificate can have a relationship with another identity certificate: a parent certificate (“issuer”) can be used to verify the child certificate. Again, this is how HTTPS/SSL works – a specific web site SSL certificate may be issued by a certificate authority such as Symantec/Verisign. The web site SSL certificate will be “issued” by Verisign, and Verisign’s digital identity certificate will be included with the website certificate. Effectively, the web browser trusts any certificate issued by Verisign through cryptographic proof that a web site SSL certificate was issued by Verisign.

Android applications use the same certificate signature concepts as SSL, including full support for certificates that are issued by other issuing parties (commonly referred to as a “certificate chain”). On an Android system, the digital certificate(s) used to sign an Android application become the application’s literal package “signature”, which is accessible to other applications via normal application meta-data APIs (such as those in PackageManager).

Application signatures play an important role in the Android security model. An application’s signature establishes who can update the application, what applications can share its data, etc. Certain permissions, used to gate access to functionality, are only usable by applications that have the same signature as the permission creator. More interestingly, very specific signatures are given special privileges in certain cases. For example, an application bearing the signature (i.e. the digital certificate identity) of Adobe Systems is allowed to act as a webview plugin of all other applications, presumably to support the Adobe Flash plugin. In another example, the application with the signature specified by the device’s nfc_access.xml file (usually the signature of the Google Wallet application) is allowed to access the NFC SE hardware. Both of these special signature privileges are hard coded into the Android base code (AOSP). On specific devices, applications with the signature of the device manufacturer, or trusted third parties, are allowed to access the vendor-specific device administration (MDM) extensions that allow for silent management, configuration, and control of the device.
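The signature-gating Bluebox describes can be illustrated with a toy sketch (Python purely for illustration; the DN strings and privilege names below are simplified stand-ins for the hardcoded AOSP checks, not real identifiers):

```python
# Toy model of Android's signature-gated privileges (illustrative only --
# the identities and privilege names are simplified stand-ins for the
# hardcoded checks in AOSP, not real identifiers).

SPECIAL_PRIVILEGES = {
    "CN=Adobe Systems": "webview_plugin",      # may act as a webview plugin in other apps
    "CN=Google Wallet": "nfc_secure_element",  # may access the NFC SE hardware
}

def privileges_for(signature_chain):
    """Return the special privileges granted to an app based on the signer
    identities present anywhere in its package signature chain."""
    return {SPECIAL_PRIVILEGES[identity]
            for identity in signature_chain
            if identity in SPECIAL_PRIVILEGES}

# A normal app signed only by its developer gets nothing special.
print(privileges_for(["CN=Some Developer"]))                # set()

# Fake ID: a chain that merely *contains* Adobe's identity wins the privilege.
print(privileges_for(["CN=Evil App", "CN=Adobe Systems"]))  # {'webview_plugin'}
```

The second call shows why a forged certificate chain is so dangerous: the privilege check only asks whether a trusted identity appears in the chain, not whether that identity was legitimately earned.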

Overall, this is an appropriate use of digital signatures in a system that supports the notion of PKI digital certificate identities. However, Bluebox Labs discovered a vulnerability that has been present in all Android versions since Android 2.1, which undermines the validity of the signature system and breaks the PKI fundamental operation. The Android package installer makes no attempt to verify the authenticity of a certificate chain; in other words, an identity can claim to be issued by another identity, and the Android cryptographic code will not verify the claim (normally done by verifying the issuer signature of the child certificate against the public certificate of the issuer). For example, an attacker can create a new digital identity certificate, forge a claim that the identity certificate was issued by Adobe Systems, and sign an application with a certificate chain that contains a malicious identity certificate and the Adobe Systems certificate. Upon installation, the Android package installer will not verify the claim of the malicious identity certificate, and creates a package signature that contains both certificates. This, in turn, tricks the certificate-checking code in the webview plugin manager (which explicitly checks the chain for the Adobe certificate) and allows the application to be granted the special webview plugin privilege given to Adobe Systems – leading to a sandbox escape and insertion of malicious code, in the form of a webview plugin, into other applications.

The problem is further compounded by the fact that multiple signers can sign an Android application (as long as each signer signs all the same application pieces). This allows a hacker to create a single malicious application that carries multiple fake identities at once, taking advantage of multiple signature verification privilege opportunities to escape the sandbox, access NFC hardware used in secure payments, and take device administrative control without any prompt or notification provided to the user of the device.

For the PKI & code savvy, you can see for yourself in the createChain() and findCert() functions of the AOSP JarUtils class – there is a conspicuous absence of cryptographic verification of any issuer cert claims, instead defaulting to simple subjectDN to issuerDN string matching.  An example of the Adobe Systems hardcoded certificate is in the AOSP webkit PluginManager class.
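To make the missing check concrete, here is a toy model of the logic – not real PKI (HMAC with the issuer’s secret key stands in for an RSA signature, and a certificate is just a dict), but it shows how the vulnerable path’s subjectDN/issuerDN string matching accepts a forged issuer claim while a correct path that verifies the issuer’s signature rejects it:

```python
# Toy model of the Fake ID flaw. NOT real PKI: HMAC with the issuer's
# secret key stands in for an RSA signature, and a certificate is a plain
# dict. The point is the logic: the vulnerable path only compares DN
# strings, while the correct path also verifies the issuer's signature.
import hashlib
import hmac

ADOBE_KEY = b"adobe-private-key"  # only the real issuer holds this

def sign(issuer_key, body):
    return hmac.new(issuer_key, body, hashlib.sha256).hexdigest()

def make_cert(subject, issuer, issuer_key=None):
    body = f"{subject}|{issuer}".encode()
    sig = sign(issuer_key, body) if issuer_key else "forged"
    return {"subject": subject, "issuer": issuer, "body": body, "sig": sig}

adobe = make_cert("CN=Adobe Systems", "CN=Adobe Systems", ADOBE_KEY)
# Forged child cert: it *claims* Adobe as issuer but has no valid signature.
fake = make_cert("CN=Evil App", "CN=Adobe Systems")

def vulnerable_check(child, parent):
    # What the flawed code effectively did: DN string matching only.
    return child["issuer"] == parent["subject"]

def correct_check(child, parent, parent_key):
    # What it should do: match the DNs AND verify the issuer's signature.
    if child["issuer"] != parent["subject"]:
        return False
    return hmac.compare_digest(child["sig"], sign(parent_key, child["body"]))

print(vulnerable_check(fake, adobe))          # True  -- forgery accepted
print(correct_check(fake, adobe, ADOBE_KEY))  # False -- forgery rejected
```

The fix Google shipped amounts to adding the second check: a claimed issuer relationship must be backed by a verifiable signature, not just a matching name string.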

You can download the Bluebox app below – click the image.

SOURCE

You can also follow Bluebox on Twitter for updates.