People are often amazed that the U.S. Supreme Court considers a corporation to be a person for some legal purposes. In the election funding case Citizens United v. FEC and other matters, the High Court has held corporations are people as it has expanded corporate rights in recent years.
But what about computers? Can computers be people in the eyes of the law?
It’s more than an esoteric question for legal scholars. It can decide cases, and it does. In fact, a central consideration in a recent Arizona appellate decision, Stuebe v. Arizona, is whether a computer-generated video notification should be considered hearsay.
During a night in February 2018, law enforcement in Maricopa County, Arizona, responded to a 911 call from a security company. A silent alarm had been triggered at Zanjero Falls West, a Glendale, Arizona, commercial property featuring upscale office suites and Spanish architecture. The property had never been occupied, sitting vacant for years after foreclosure, a victim of Glendale’s bust after its boom.
Upon arrival at Zanjero Falls West, law enforcement saw two people running toward an SUV. When they stopped the SUV, Jerry Stuebe was a passenger in the vehicle. Going back over the SUV’s path, law enforcement discovered two large bags containing copper wire, bolt-cutters, and other burglary tools.
Just when you thought things couldn’t get any worse for Mr. Stuebe, they did.
Zanjero Falls West’s property manager testified he received an automated, computer-generated email from the security company after a motion-sensor security camera was activated. A video file was attached to the email, and the email specified the date and time the video was recorded. Over Stuebe’s hearsay objection, the superior court admitted the email and the video in evidence.
A jury convicted Stuebe of third-degree burglary, a felony, and the trial court sentenced him to 10 years in prison. Stuebe appealed, arguing the trial court erred in admitting into evidence the email and attached video generated by the surveillance system.
Stuebe argued the email and video were inadmissible hearsay and that introducing the evidence violated his rights under the Confrontation Clause of the Sixth Amendment to the U.S. Constitution.
Hearsay Against the Machine?
Next to the Rule Against Perpetuities, the Hearsay Rule may be one of the most despised concepts in law school.
Stated simply, the rule bars the admission of out-of-court statements offered to prove the truth of the matter asserted.
For instance, if Chauncey testified Wadsworth told Chauncey he cheated on his wife—and Chauncey’s testimony was being introduced by Wadsworth’s wife’s lawyers in their divorce proceedings to prove Wadsworth committed adultery—the testimony would be inadmissible hearsay.
However, if Chauncey’s testimony were introduced at Wadsworth’s murder trial—to establish the alibi that Wadsworth was with Chauncey and not murdering Collinsworth because he was angry about Collinsworth’s NFL coverage—it would not be hearsay, because the evidence would not be introduced to establish the truth of the matter asserted: that Wadsworth was cheating on his wife.
Applying this hypothetical to the legal dispute in Stuebe, the state offered the computer-generated email and video (the “statement”) for the truth of the matter asserted (that Stuebe burglarized Zanjero Falls West).
But what about that “statement” being made by a non-person machine?
Computers are People, Too?
Arizona’s hearsay rule, codified at Ariz. R. Evid. 801, follows the common law rule, providing that hearsay is:
A statement that: (1) the declarant does not make while testifying at the current trial or hearing; and (2) a party offers in evidence to prove the truth of the matter asserted in the statement.
Because, under Ariz. R. Evid. 801(a) and (b), the rule against hearsay applies to “a person’s” statements and “the person who made the statement,” an important legal question was whether a machine that generates information may qualify as a “person” under the rules. Unfortunately for purposes of this discussion, the rules do not define “person.”
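The rule’s elements can be sketched as a small decision function. This is only an illustrative caricature of the Rule 801 test (the parameter names are my shorthand, not statutory language), but it shows why the “person” element is decisive for machine-generated evidence:

```python
def is_hearsay(declarant_is_person: bool,
               made_out_of_court: bool,
               offered_for_truth: bool) -> bool:
    """Toy sketch of the hearsay test under Rule 801.

    All elements must be present: the statement is hearsay only if
    (1) a *person* made it, (2) it was made outside the current trial
    or hearing, and (3) it is offered to prove the truth of the
    matter asserted.
    """
    return declarant_is_person and made_out_of_court and offered_for_truth

# Chauncey repeating Wadsworth's admission, offered to prove adultery:
assert is_hearsay(True, True, True)

# The same testimony offered only as an alibi (not for its truth):
assert not is_hearsay(True, True, False)

# A computer-generated email: no human declarant, so no hearsay:
assert not is_hearsay(False, True, True)
```

The third case is the one Stuebe raises: if the declarant element fails because a machine “spoke,” the analysis never reaches the other prongs.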
Because Arizona—like many states—models its evidentiary rules on the Federal Rules of Evidence, courts turn to federal interpretation of evidentiary issues.
Federal courts have held generally that computer results are not hearsay. In United States v. Lizarraga-Tirado, which was cited by the Stuebe court, the 9th Circuit held that a Google Earth satellite image and its machine-placed GPS tack did not constitute hearsay.
The Lizarraga-Tirado court observed that the satellite image itself, like a photograph, makes no assertion, so it’s not hearsay. However, the court conceded that the Google Earth GPS placing a tack at the coordinates presented a trickier question.
“Unlike a satellite image itself, labeled markers added to a satellite image do make clear assertions. Indeed, this is what makes them useful. For example, a dot labeled with the name of a town asserts that there’s a town where you see the dot. The label ‘Starbucks’ next to a building asserts that you’ll be able to get a Frappuccino there. In short, labeled markers on a satellite image assert that the labeled item exists at the location of the marker,” the Lizarraga-Tirado court said.
If the tack were placed manually, the court noted, it would be hearsay. However, the Lizarraga-Tirado court noted that Google Earth placed the tack automatically. Because the tack was placed automatically, it was not hearsay, the court held.
The Lizarraga-Tirado court conceded also that there were other evidentiary concerns, such as a malfunction or tampering, but no hearsay. The court joined several other courts in holding that machine statements are not hearsay.
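The manual/automatic distinction the court drew can be made concrete with a sketch: a program that derives a placemark purely from GPS coordinates, with no human choosing where the marker goes. The coordinates and label below are invented for illustration:

```python
def placemark_from_gps(lat: float, lon: float, label: str) -> str:
    """Generate a KML placemark mechanically from raw coordinates.

    No human judgment is involved: the marker position is a pure
    function of the GPS reading. That automatic placement is the
    property the Lizarraga-Tirado court found dispositive.
    """
    return (
        "<Placemark>"
        f"<name>{label}</name>"
        f"<Point><coordinates>{lon},{lat},0</coordinates></Point>"
        "</Placemark>"
    )

# Hypothetical GPS reading (illustrative values only):
kml = placemark_from_gps(31.334, -110.94, "arrest site")
```

Had an analyst instead eyeballed the map and dragged the tack into place, the tack’s position would reflect a human assertion, and the hearsay analysis would change.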
Federal circuit courts of appeals concluding that such “machine statements” by computers do not constitute hearsay include cases involving:
Telephone billing records in the case of an airline flight attendant who called in a bomb threat to avoid working a shift, United States v. Lamons (11th Cir. 2008).
Machine-generated medical laboratory results in the case of an appellant convicted of cocaine distribution, United States v. Moon (7th Cir. 2008) and United States v. Washington (4th Cir. 2007).
Fax machine header, date, and time data in the case of a depositor convicted of bank fraud. Citing Mueller & Kirkpatrick, Federal Evidence §380, the court held, “a statement is something uttered by a ‘person,’ so nothing ‘said’ by a machine is hearsay.”
Computer-generated date and IP address data headers in photographs in the case of an appellant convicted of transporting obscene material, United States v. Hamilton (10th Cir. 2005).
Why Computer Hearsay Matters
We’ve noted before that legal questions become more complicated when computers take on more tasks performed traditionally by humans. For instance, can a computer engage in the practice of law?
In bygone days, a private investigator would have recorded a video of the incident and delivered it to the property manager. You can bet that the private eye would be testifying, so there would be no hearsay.
However, today, we have computerized silent alarms that record video and email it automatically. If the digital evidence is offered into evidence, it could certainly be offered to prove the truth of the matter asserted.
But is it a statement? Does it trigger the Sixth Amendment’s Confrontation Clause?
As Barry University Law School Professor Brian Sites noted in his Georgetown Law Technology Review article, Machines Ascendant: Robots and the Rules of Evidence, courts have developed the “machine-generated testimony doctrine,” holding:
“Machine-generated data does not trigger the Confrontation Clause because it is the machines—not the analysts operating them—that make the statements at issue, and machines are not ‘witnesses’ within the meaning of the Confrontation Clause.”
Using case law examples involving DNA results, breathalyzer results, and urinalysis data, Sites notes, “Courts, for the most part, appear unconcerned with the rise in the number of ‘witnesses’ immune to cross-examination.”
Courts may have rejected these attempts to hold computers accountable as we would humans, but the danger is real.
As Professor Sites notes, computers are the creation of imperfect humans, which, in turn, makes them imperfect. What happens when computers learn to lie? It’s a Brave New World.
David Horrigan is Discovery Counsel and Legal Education Director at Relativity. An attorney, journalist, and industry analyst, David has served as Analyst and Counsel at 451 Research, and he was Runner-Up for Best Legal Analysis in the 2019 LexBlog Excellence Awards.
Editor’s Note: Will English, The Daily News’ web developer, was a part of the immersive learning class mentioned in the article.
For three years, Ball State computer science students have worked to make their field accessible to the next generation of computer scientists.
Since fall 2017, Ball State computer science professor Dave Largent and his students have partnered with Northside Middle School, Burris Laboratory School and Muncie Central High School for an immersive learning program focused around teaching computer hardware and software.
The program, Largent said, encourages involvement in computer science by teaching both students and teachers alike, so teachers can continue the instruction after the project’s completion.
“Overall, the goal would be to get as many grade school and high school students exposed to computer science as possible,” Largent said.
The original idea for the project, Largent said, was based on the idea of bringing more diversity to computer science, which he called “very white male dominated.”
“As I’ve looked at possible solutions to [the lack of diversity], one of the solutions I realized was if we can educate or make young students aware of the possibility that they can be a computer scientist in the future, then that’s going to fill that pipeline [with] more diversity,” Largent said.
The groups of Ball State students participating in the project met several problems during the semester, including the fact that some Muncie Community Schools teachers had little to no experience in computer science, said project member Corbin Creedon, senior computer science major. Some of the biggest complications in the process involved getting teachers to open up to the learning themselves.
“Something that immediately came up for specifically our group is how our teacher had no computer science background at all,” Creedon said. “He was hired in the summer, took a quick week lesson plan from Project Lead the Way (PLTW) to learn some stuff, and that was it.”
Largent’s students were also challenged to interact with the middle and high schoolers over the course of several meetings throughout the semester, during which they taught the students in both the physical construction of computers and the creation of software.
Junior computer science major Sara Bailey said her experience as a woman in the field of computer science was an important part of wanting to work with younger students.
“I also liked working with kids, and I thought, the idea of helping kids, and especially girls like younger girls get into computer science was exciting, because personally, I know that I didn’t have that much support in middle school,” Bailey said.
Bailey and her group taught the students for nine weeks. At the culmination of the course, the students were asked to assemble a final project that showcased their understanding of computer science.
For the classes near the end of the semester, that meant designing holiday decorations.
“For these nine weeks because of Thanksgiving and Christmas, [the final projects are] primarily holiday based, so one of them was [to] create a small light system to be seen in the dark, or something like that, or create a waving Santa through [an input],” Creedon said.
Going forward, Largent said he hopes to implement the project beyond the fall semester of each year.
In an immersive learning project composed primarily of computer science students, Largent said lowering the number of credits and adding a spring semester could make the project more appealing to students outside of the field.
“[Adding the spring semester] will provide continuous interactions with the schools throughout the year,” Largent said in an email. “With the reduced credit hours, I’m hopeful to include more students, and maybe even include some education majors as well, as it will be easier for them to include another small class into their schedule.”
Contact John Lynch with comments at email@example.com or on Twitter @WritesLynch.
With a week to go before final exams, ITI Technical College, a private Baton Rouge vocational college, is going back to paper, at least partially, until its computer system is fully restored. The school is the latest Louisiana institution to be victimized by ransomware.
ITI Vice President Mark Worthy said Tuesday the college’s computer personnel were working to get all the servers in the system back up and are making progress. But in the meantime, since many staff members began their careers before automation, they’re starting to go through the paper documents that back up the databases to ensure that grades are recorded and financial aid gets to the right people.
“Full functionality? Not sure when because of the complexity,” Worthy said. Some of the critical systems are coming back online. Classes for the 605 students are continuing. Communications, however, have been crippled, so administrators are visiting classrooms to convey information.
What’s taking time is that the technicians are reconnecting each server for computers used by students and administrators on the six-acre campus only after checking to ensure the code is clean.
Technicians traced the ransomware attack back to the Czech Republic. The attackers replicated an employee’s contact list and sent out emails to faculty and staff that looked like the real thing. The messages asked the reader to click on an expected report, which one of the employees did on Monday, Jan. 27. In the dark hours of the following Wednesday morning, the school’s IT administrator was checking the network, as she usually does, and found suspicious activity. She disconnected all the servers from the internet, then started looking for the impacted systems, Worthy said.
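The lure here was a classic lookalike email: a message that appears to come from a known contact. One common defensive heuristic is to flag sender domains that closely resemble, but do not exactly match, the organization’s own domain. A minimal sketch using only the standard library (the trusted domain below is an assumption for illustration, not ITI’s confirmed mail domain):

```python
from difflib import SequenceMatcher

TRUSTED_DOMAIN = "iticollege.edu"  # assumed domain, for illustration only

def looks_like_spoof(sender: str, trusted: str = TRUSTED_DOMAIN,
                     threshold: float = 0.8) -> bool:
    """Flag senders whose domain *resembles* the trusted domain
    without matching it exactly -- a common lookalike tactic."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain == trusted:
        return False                      # exact match: not a lookalike
    ratio = SequenceMatcher(None, domain, trusted).ratio()
    return ratio >= threshold             # close-but-not-equal: suspicious

assert not looks_like_spoof("registrar@iticollege.edu")   # legitimate
assert looks_like_spoof("registrar@iticol1ege.edu")       # '1' for 'l'
assert not looks_like_spoof("friend@example.org")         # merely external
```

A check like this catches typosquatted domains but not a compromised legitimate account, which is why user training and attachment scanning still matter.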
But the ransomware was able to encrypt some of the databases and keep the school from accessing its files. Eventually, the techs found a message to contact the attackers for instructions on how and how much to pay to regain access to the databases. “We won’t pay and we won’t contact these criminals,” he said.
Initially, Worthy offered to hire specialists to work on the problem. But his IT staff argued that they would be more familiar with the architecture of the system. Besides, the school teaches information technology and has faculty and staff able to handle the problem.
Unlike the City of New Orleans or state government, both of which were hit by ransomware attackers, ITI is a privately owned college. State government’s teams and experts are not available to the school.
Gov. John Bel Edwards is expected to discuss cybersecurity Wednesday in a speech before the Louisiana Municipal Association, whose members include several localities hit with crippling cyber-attacks.
“We’re running this rodeo on our own,” Worthy said. “Fortunately, we teach IT, so we have a lot of really, really sharp people already on staff.”
Worthy said ITI would be contacting police and the FBI after the system is back up and the incriminating evidence is collected.
Similar ransomware attacks have previously crippled Louisiana state agencies, city governments, and school systems.
Two days before commencement ceremonies, Baton Rouge Community College leaders learned that its computers were cyberattacked by ransomware.
In November, roughly 1,500 of the state’s 30,000 computers were infected by cyber attackers. The hackers blocked access to the state’s data until a ransom was paid. The state refused to pay but had to shut down systems, which disrupted state services: food stamp delivery slowed, and the Office of Motor Vehicles closed for several weeks.
In December, the City of New Orleans shut down its computer systems while technicians cleaned the ransomware out of code and reloaded the information onto city computers.
Puget Sound — Washington’s inland sea — is a mysterious place. It’s the southern-most fjord in the lower 48 states. It’s fed by rivers that create shallow, mucky tideflats. In other spots it plunges more than 900 feet deep, giving it oceanic traits, but it doesn’t flow freely in and out of the Pacific Ocean. The main entrance and exit into the Sound is relatively narrow and shallow, creating a sort of bathtub that curtails the exchange of seawater and wildlife.
The Sound is facing serious challenges. The beloved local orcas are in alarming decline, the human population and its polluting cars, roadways and buildings is growing, and the damaging effects of climate change loom large.
But scientists are employing a sophisticated computer modeling tool to unravel some of the Puget Sound’s complex puzzles and trigger actions that can help safeguard the iconic Northwest waterway.
“We now are in a position where you can address some really important questions in Puget Sound,” said Joel Baker, director of the University of Washington’s Puget Sound Institute.
One of the more surprising and hopeful results comes from a recently published study on climate change. It predicts that the Sound could in many ways fare a bit better than the Pacific Ocean when considering the damaging effects of a warmer world.
The Salish Sea Model was built by scientists at the Seattle office of the Pacific Northwest National Laboratory (PNNL), part of the U.S. Department of Energy. PNNL program manager Tarang Khangaonkar launched the project in 2008 in partnership with the state Department of Ecology. Their goal was to create a model that’s widely useful and built in a collaborative, transparent process.
Scientists can use the model to test theories about how chemicals and creatures move through Puget Sound, tweaking different inputs to understand past and future events. The model has been used to find conditions favorable for native sixgill sharks, guide restoration in the Stillaguamish River delta, and study oyster reproduction.
When scientists tried to understand why some areas were hit harder by low-oxygen dead zones, Khangaonkar said, “nobody could figure out why.”
Searching for the cause of suffocation
The model encompasses what’s known as the Salish Sea, which spans Puget Sound, the San Juan Islands, a strait running to the northwest tip of Washington and the waters off the east side of Vancouver Island. The researchers also included a stretch of offshore water that extends south along the Washington Coast, past the mouth of the Columbia River.
Early runs of the model could create low-oxygen conditions, but the hypoxia was everywhere, not just the observed hot spots in Hood Canal and other specific inlets and coves. The model included layers of data from multiple sources to create the tides, currents, weather, underwater geographic features, shorelines, water temperature, pH, and salinity. Ecology provided data on nutrients that flowed into Puget Sound from 99 sewage treatment plants, industrial outfalls and other points, plus 161 streams emptying into the sea.
But even with all of that information, the Salish Sea Model couldn’t recreate past conditions of hypoxia. Then researchers added data on the muddy, sandy bottom of Puget Sound. The model worked, revealing a key driver of hypoxia.
“Unless you take into account everything,” Khangaonkar said, “it’s not possible to guess at the reason.”
The scientists figured out that algae were reproducing in great blooms that eventually died, sank, and rotted in the sediment at the bottom of the sea. The decaying plants pulled oxygen out of the water. The result wasn’t necessarily intuitive at first. While alive, the algae released oxygen, as plants do, so they weren’t an obvious culprit for hypoxia.
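The mechanism the modelers identified — bloom, die-off, decay consuming oxygen — can be caricatured in a one-box oxygen budget. This is a toy illustration with invented, unitless rate constants, not the Salish Sea Model itself:

```python
def oxygen_timeseries(o2_init: float, detritus: float,
                      decay_rate: float, reaeration: float,
                      o2_sat: float, steps: int) -> list:
    """One-box dissolved-oxygen sketch.

    Each step, decaying organic matter (detritus) consumes oxygen,
    while surface exchange nudges O2 back toward saturation. All
    rates and units are arbitrary; only the mechanism is real.
    """
    o2, series = o2_init, []
    for _ in range(steps):
        consumed = decay_rate * detritus          # decay uses up O2
        detritus -= consumed                      # detritus is depleted too
        o2 += reaeration * (o2_sat - o2) - consumed
        o2 = max(o2, 0.0)                         # can't go below zero
        series.append(o2)
    return series

# A large dead bloom drives O2 down before reaeration recovers it:
series = oxygen_timeseries(o2_init=8.0, detritus=20.0,
                           decay_rate=0.1, reaeration=0.05,
                           o2_sat=8.0, steps=50)
assert min(series) < 8.0          # hypoxic dip while detritus decays
assert series[-1] > min(series)   # partial recovery once it's exhausted
```

Even this cartoon shows why the sediment data mattered: without a pool of decaying organic matter on the bottom, there is nothing to draw the oxygen down in specific places.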
That conclusion “led to quite a bit of debate,” Khangaonkar said.
But it also helped researchers think more strategically about which pollution sources need to be curbed to prevent them from essentially fertilizing the algae with nutrients. That includes sewage treatment plants, leaking shoreline septic systems, and lawn chemicals. The model highlighted the fact that Puget Sound is not well flushed by water from the ocean, trapping and recycling pollutants in the inland sea.
Officials with Ecology are using these results to update pollution regulations based on scientific research.
“This model is not a black box,” said Cristiana Figueroa-Kaminsky, a pollution and modeling manager for Ecology.
It’s based on open-source code with input from numerous agencies and academic institutions, she said.
The UW’s Baker agreed that it’s a robust model, and added that the university also has the LiveOcean model that can make limited forecasts addressing different issues in the Sound and Pacific.
“They’re as good as any models in the world,” Baker said.
‘Without the numbers you fear’
With the success of the oxygen-level work, Khangaonkar and his team were ready to tackle a bigger question: What will happen to Puget Sound as the planet keeps warming?
The researchers decided to gaze decades ahead to 2095. They added information from a national model and ran the simulation using a trajectory that assumes humankind follows a worst-case scenario path and does little to reduce global warming pollution.
Again, the model generated some surprising predictions.
Puget Sound’s water conditions are greatly impacted by the melting snowpack of surrounding mountains. That water flows from rivers, flushing the inland sea. Warmer weather is shrinking the annual snowpack and reducing its spring and summer runoff. Experts feared that the circulation of the Sound would be disrupted.
“If in the future the flushing strength were to go down, it would lead to catastrophic failure of our ecosystem,” Khangaonkar said.
Because Puget Sound is a relatively small body of water, one might expect it would fare worse than the Pacific Ocean. But the model, pulling together effects of sea level rise, changes in salinity and other factors, predicted a future where the water in Puget Sound’s deep basins would continue circulating, churning the water. That would keep it cooler, less acidic and more oxygenated than the Pacific.
“Climate change brings in a lot of counterintuitive findings,” Baker said. Flooding, however, is another concern.
Khangaonkar and his team published their climate change results in May in a scientific journal.
“Without the numbers, you fear… what is it going [to] do to us?” he said. The model gives a glimpse. “Rather than speculate, you can just run it out and get the answer.”
Solving a toxic riddle
For roughly two decades, scientists Jim West and Sandie O’Neill have been sampling Puget Sound wildlife, tracking the amount of pollution they carry. A main focus has been PCBs, a family of long-lasting industrial chemicals banned 40 years ago. Since then, millions of dollars have been spent scrubbing them from Puget Sound.
And yet they’re still here.
PCBs, or polychlorinated biphenyls, show up in resident wildlife, including Pacific herring, Chinook salmon, harbor seals and orcas. What’s particularly weird about the PCBs is that their levels are holding steady or even increasing in some marine creatures, while other pollutants are declining. Although the concentrations of the PCBs in the sediment and water are so low they’re sometimes undetectable, they’re much higher in the fish, seals and whales. The math doesn’t add up.
“Something is happening where the PCBs are getting into the environment and an awful lot of them are ending up in the pelagic [or marine] food web,” said O’Neill, who works with West at the Washington Department of Fish and Wildlife.
The chemicals can disrupt the growth of Chinook salmon, the local orcas’ favorite food, and are believed to threaten the killer whales directly by harming their immune systems and ability to reproduce.
One of the main theories of how toxics get into the marine food web is that chemicals settle into the sediment, get consumed by microscopic organisms, and move their way up the food chain.
But it seems that something else is happening in Puget Sound.
It appears that PCBs from upland sources such as industrial caulk, electrical transformers, and contaminated soils are still being washed into the sea. West and O’Neill suspect that some of the PCBs are getting sucked into the food chain straight from the water, before they even settle into the mud.
There are a couple of ways the PCBs could move from the open water into marine life. The chemicals are lipophilic, meaning they love to stick to fats, which includes the outside of bacteria and algae. The PCBs can also get sucked up by microscopic zooplankton floating in the water column.
As those tiny organisms are eaten by small fish that are eaten by bigger fish that are eaten by marine mammals, the PCBs move through the food chain to larger predators. Their levels build as the toxics are stored in body fat, and mothers can pass PCBs to their babies through their milk. When the animals die and decay, the PCBs are recycled back into the food chain via smaller creatures.
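The chain described above — each predator concentrating what its prey carried — can be expressed as a simple multiplicative sketch. The magnification factors below are invented for illustration, not measured Puget Sound values:

```python
def biomagnify(water_conc: float, factors: list) -> list:
    """Concentration at each trophic level, starting from water.

    Each step multiplies by that level's biomagnification factor,
    so even a near-undetectable water concentration can end up
    orders of magnitude higher in top predators.
    """
    levels, conc = [], water_conc
    for f in factors:
        conc *= f
        levels.append(conc)
    return levels

# water -> plankton -> herring -> Chinook -> orca (toy factors):
chain = biomagnify(1e-6, [1000, 10, 10, 10])
# A million-fold increase over the water concentration:
assert abs(chain[-1] - 1.0) < 1e-9
```

This is why the researchers see high tissue concentrations in seals and orcas even where the water itself tests near the detection limit: the fat-loving chemicals accumulate multiplicatively up the food web.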
While the hypothesis makes sense, scientists need more data to prove it. They’re eager to pinpoint the pollution sources and pathways of movement in order to close the PCB tap. And for local orcas, whose population has sunk from highs in the 200s to just 73 animals, time is running out.
When Khangaonkar suggested a collaboration, West and O’Neill jumped at the chance. They now have results for the first phase of their research, which included work with UW scientists, and are starting another study correlating the model with pollutants in plankton.
The Salish Sea Model has the potential to “inform us about where the PCBs are coming into the food web, then you can do something about them,” O’Neill said. It could identify hot spots for cleanup that could most benefit marine life. “You can’t clean up the whole of the Puget Sound basin,” she added. “It’s too much.”
It’s just the kind of project that Khangaonkar gets excited about.
“We have developed this [model] for everybody to be able to use,” he said. “And when folks are interested in using it, there is a strong commitment to actually work with them and make it happen.”
Editor’s Note: Funding for GeekWire’s Impact Series is provided by the Singh Family Foundation in support of public service journalism. GeekWire editors and reporters operate independently and maintain full editorial control over the content.
Mining cryptocurrency used to require thousands of dollars worth of equipment to see any kind of meaningful return, but not anymore. Newer digital currencies like Monero, ByteCoin, and AEON have given would-be miners the ability to mine tokens right from their laptops. This might benefit small-time miners who want to get involved in the sector, but for every good thing online, there are always people who figure out a way to use it for bad.
Hackers have begun using these tools to infect computers and websites and secretly mine cryptocurrencies. This emerging type of malware attack has been dubbed “cryptojacking,” and it could cause your computer to overheat and crash. Luckily, spotting these hidden miners isn’t all that difficult.
Cryptojacking essentially hijacks your computer’s CPU power to mine. This means that while you’re browsing the web, the malware is running in the background, completely unbeknownst to you. There are a few types of this malware: some run only when you visit a certain website, while others can be maliciously installed on your computer. The best way to prevent this is by using antivirus software and ad blockers.
If you’ve already been hit with this kind of malware, you’ll notice your computer acting sluggish, getting warmer than usual, or its fan constantly spinning. If you aren’t running any demanding software, like video games or video-editing programs, this should be the first hint that your computer is working overtime.
If you’ve noticed your laptop acting up, it’s time to go check on what’s going on under the hood. Mac users can view a detailed breakdown of everything their computer is running by searching “Activity Monitor” and using the magnifying glass icon at the top-right of the screen. Windows users can press Ctrl+Shift+Esc to bring up “Task Manager” (or press Ctrl+Alt+Del and select it from the menu).
Both of these menus will display a graph of how much of your computer’s processing power is being used. Any massive spikes should be red flags. You’ll also see an ordered list of the programs using the most processing power at the moment. Before ending any of these programs, be sure to research what they are, as you could be ending a crucial part of your operating system.
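The same triage can be scripted: given a snapshot of processes and their CPU share (read off Task Manager, Activity Monitor, or a library such as psutil), flag anything unexpectedly hungry. A minimal, standard-library-only sketch over an illustrative snapshot; the process names and the 50% threshold are assumptions, not fixed rules:

```python
def flag_cpu_hogs(snapshot: dict, threshold: float = 50.0,
                  allowlist: frozenset = frozenset()) -> list:
    """Return process names using more than `threshold` percent CPU,
    skipping known-demanding programs the user has allowlisted."""
    return sorted(name for name, cpu in snapshot.items()
                  if cpu > threshold and name not in allowlist)

# Illustrative snapshot: a browser pinned at 92% with no heavy
# software running is exactly the cryptojacking symptom described.
snapshot = {"browser": 92.0, "editor": 3.5, "video_render": 88.0}
suspects = flag_cpu_hogs(snapshot, allowlist=frozenset({"video_render"}))
assert suspects == ["browser"]
```

As the article cautions, a flagged name is a prompt for research, not an automatic kill: some legitimate system processes are CPU-heavy at times.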
Both Tesla and the Los Angeles Times have had their sites infected by cryptojacking software. Companies with popular websites are the most at risk, as hackers can embed code onto their servers and use the CPU power of everyone who visits the site. But making it a habit to check on how your computer is running will ensure your device isn’t getting used to make someone else a crypto fortune.
Computer hackers are getting more sophisticated. They are not afraid to hold cities, states, and companies hostage until they pay a ransom. Hackers are modern-day tech pirates who disrupt computer programs and turn shareholders into anxiety-ridden puppets. Computer networks in Denver, Atlanta, and Baltimore, as well as a computer network at Boeing, are recent victims. Atlanta’s computers went down on March 22 when a hacker locked important data behind an encrypted wall. The wall would stay in place, according to the hackers, until the city paid them $51,000 in Bitcoin. Atlanta had a week to comply. If the city didn’t pay, all that important data would vanish, according to the computer pirates. No one is sure if Atlanta paid the money, according to a Fox News report. But Mayor Keisha Lance Bottoms didn’t rule out payment.
The hacking group calls itself “SamSam.” SamSam is not new to the hacking world. The group pocketed more than $800,000 in 2017. The city of Leeds, Alabama, paid SamSam $12,000 in February 2018 to release its data. But Atlanta is not the only city SamSam has in its hacking sights this month. Officials in Baltimore said their 911 dispatch system was under attack; the system was recently down for 17 hours, proof the attackers were serious. The hackers were able to get into the system after the city made an internal change to its firewall. But the Baltimore hackers didn’t ask for money, and that is concerning, according to Frank Johnson, Baltimore’s chief information officer.
Boeing, the world’s top aerospace company, is also under attack by the now famous WannaCry ransomware. WannaCry is the same ransomware that crippled Britain’s healthcare services in 2017. The Boeing attack is not as serious as the attack in Britain, according to Boeing’s head of communications Linda Mills. Mills also said the 777 jet program was not part of the hack. Mills said only a few company machines were under attack.
Denver also had a suspicious outage when denvergov.org and pocketgov.org, along with other online services, suddenly stopped working in March. Some city staffers lost access to their email accounts. Denver officials claim the shutdown was the work of a computer bug, but Colorado’s Department of Transportation was a SamSam victim in February. The hackers said the information would be returned if Colorado paid in Bitcoin, according to a news report by Denver7.
June 1 – 3, 2018| Ceske Budejovice, Czech Republic
Cybersecurity Conference Description
PURPOSES OF THE CONFERENCE:
1. Providing a podium for scientists to present their research and scientific results in the field of advanced computer information technologies.
2. Motivation for scientific activity.
3. Exchange of progressive ideas and research results.
4. Development of creativity in scientific activity.
A growing number of computer security thinkers, myself included, believe that in the very near future, most computer security will be machine versus machine: good bots versus bad bots, completely automated. We are almost there now.
Fortunately or unfortunately, I don’t think we’ll get to a purely automated defense for a long, long time.
Today’s security defenses
Many of our computer security defenses are already completely automated. Our operating systems are more securely configured out of the box, from firmware startup to the operating system running apps in secure, hardware-enforced virtual boundaries, than ever before. If left alone in their default state, our operating systems will auto-update themselves to minimize any known vulnerabilities that have been addressed by the OS vendor.
Most operating systems come with rudimentary blacklists of “bad apps” and “bad digital certificates” that they will not run and always-on firewalls with a nice set of “deny-by-default” rules. Each OS either contains a built-in, self-updating, antimalware program or the users or administrators install one as one of the first administrative tasks they perform. When a new malware program is released, most antimalware programs get a signature update within 24 hours.
Most enterprises are running or subscribing to event log message management services (e.