
/midnightriders/ - QR Midnight Riders

Dig, Meme, Pray.. WIN!




File: a0d82412a23af43⋯.png (222.88 KB,492x298,246:149,Screenshot_2024_07_27_at_2….png)

cb0e92 No.196622

This is a bread on Facebook and its evil rise to power: the what, why, and when of the greatest spy tool ever put together against humanity.

Civil liberties, the US Constitution & the Bill of Rights were protections that kept the Government/Deep State from being able to track us all. PNAC wanted a "New Pearl Harbor" (9/11) to FORCE the "PATRIOT ACT," which would allow the government to finally spy on US Citizens legally. A side bonus was the influx of money a war on terrorism would provide, a bogeyman enemy with no face and a loose-fitting name and definition, and most importantly "SUSPICION" of anyone [THEY] wanted to unconstitutionally track & spy on "LEGALLY".

A lot of what I will share in this bread is paywalled or gone, and this is just what a night or so of digging netted me. This is a central hub of sorts when it comes to spying/cryptography and the Global Cabal/Deep State.

I will try to put some type of order to this, BUT it is a dump of information, not an essay in chronological order. FYI:

You will realize quickly that Mark Fuckerberg is nothing more than a figurehead. NO, HE DID NOT come up with Facebook; he was the "patsy" who gave a reason and a storyline for its existence.

That said, here we go:

____________________________
Disclaimer: this post and the subject matter and contents thereof - text, media, or otherwise - do not necessarily reflect the views of the 8kun administration.

cb0e92 No.196623

File: c630b63c2f277bf⋯.png (449.05 KB,750x2524,375:1262,Screenshot_2024_07_27_at_2….png)

File: 0689d07e20bac16⋯.png (502.39 KB,716x2598,358:1299,Screenshot_2024_07_27_at_2….png)

https://web.archive.org/web/20240131135909/https://www.nytimes.com/2002/11/09/us/threats-responses-intelligence-pentagon-plans-computer-system-that-would-peek.html?searchResultPosition=1

THREATS AND RESPONSES: INTELLIGENCE; Pentagon Plans a Computer System That Would Peek at Personal Data of Americans (NYTimes WAYBACK link)

By John Markoff

Nov. 9, 2002

The Pentagon is constructing a computer system that could create a vast electronic dragnet, searching for personal information as part of the hunt for terrorists around the globe – including the United States.

As the director of the effort, Vice Adm. John M. Poindexter, has described the system in Pentagon documents and in speeches, it will provide intelligence analysts and law enforcement officials with instant access to information from Internet mail and calling records to credit card and banking transactions and travel documents, without a search warrant.

Historically, military and intelligence agencies have not been permitted to spy on Americans without extraordinary legal authorization. But Admiral Poindexter, the former national security adviser in the Reagan administration, has argued that the government needs broad new powers to process, store and mine billions of minute details of electronic life in the United States.

Admiral Poindexter, who has described the plan in public documents and speeches but declined to be interviewed, has said that the government needs to break down the stovepipes that separate commercial and government databases, allowing teams of intelligence agency analysts to hunt for hidden patterns of activity with powerful computers.

"We must become much more efficient and more clever in the ways we find new sources of data, mine information from the new and old, generate information, make it available for analysis, convert it to knowledge, and create actionable options," he said in a speech in California earlier this year.

Admiral Poindexter quietly returned to the government in January to take charge of the Office of Information Awareness at the Defense Advanced Research Projects Agency, known as Darpa. The office is responsible for developing new surveillance technologies in the wake of the Sept. 11 attacks.

In order to deploy such a system, known as Total Information Awareness, new legislation would be needed, some of which has been proposed by the Bush administration in the Homeland Security Act that is now before Congress. That legislation would amend the Privacy Act of 1974, which was intended to limit what government agencies could do with private information.

The possibility that the system might be deployed domestically to let intelligence officials look into commercial transactions worries civil liberties proponents.

"This could be the perfect storm for civil liberties in America," said Marc Rotenberg, director of the Electronic Privacy Information Center in Washington. "The vehicle is the Homeland Security Act, the technology is Darpa and the agency is the F.B.I. The outcome is a system of national surveillance of the American public."

Secretary of Defense Donald H. Rumsfeld has been briefed on the project by Admiral Poindexter and the two had a lunch to discuss it, according to a Pentagon spokesman.

"As part of our development process, we hope to coordinate with a variety of organizations, to include the law enforcement community," a Pentagon spokeswoman said.

An F.B.I. official, who spoke on the condition that he not be identified, said the bureau had had preliminary discussions with the Pentagon about the project but that no final decision had been made about what information the F.B.I. might add to the system.

A spokesman for the White House Office of Homeland Security, Gordon Johndroe, said officials in the office were not familiar with the computer project and he declined to discuss concerns raised by the project's critics without knowing more about it.

He referred all questions to the Defense Department, where officials said they could not address civil liberties concerns because they too were not familiar enough with the project.

Some members of a panel of computer scientists and policy experts who were asked by the Pentagon to review the privacy implications this summer said terrorists might find ways to avoid detection and that the system might be easily abused.

"A lot of my colleagues are uncomfortable about this and worry about the potential uses that this technology might be put, if not by this administration then by a future one," said Barbara Simons, a computer scientist who is past president of the Association for Computing Machinery. "Once you've got it in place you can't control it."

Other technology policy experts dispute that assessment and support Admiral Poindexter's position that linking of databases is necessary to track potential enemies operating inside the United States.

"They're conceptualizing the problem in the way we've suggested it needs to be understood," said Philip Zelikow, a historian who is executive director of the Markle Foundation task force on National Security in the Information Age. "They have a pretty good vision of the need to make the tradeoffs in favor of more sharing and openness."

On Wednesday morning, the panel reported its findings to Dr. Tony Tether, the director of the defense research agency, urging development of technologies to protect privacy as well as surveillance, according to several people who attended the meeting.

If deployed, civil libertarians argue, the computer system would rapidly bring a surveillance state. They assert that potential terrorists would soon learn how to avoid detection in any case.

The new system will rely on a set of computer-based pattern recognition techniques known as data mining, a set of statistical techniques used by scientists as well as by marketers searching for potential customers.
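To make the term concrete: at its simplest, data mining of this kind means scoring records against a statistical baseline and flagging whatever deviates. A minimal sketch in Python, with invented transaction data and an arbitrary two-sigma threshold (nothing here reflects the actual TIA design, which was never made public):

    import statistics

    # Invented account history; the 5000 entry is the planted oddball.
    transactions = [120, 95, 130, 110, 105, 99, 5000, 115, 102, 98]

    mean = statistics.mean(transactions)
    stdev = statistics.stdev(transactions)

    # Flag anything more than two standard deviations from this account's norm.
    for amount in transactions:
        z = (amount - mean) / stdev
        if abs(z) > 2:
            print(f"flagged: {amount} (z = {z:.1f})")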

The system would permit a team of intelligence analysts to gather and view information from databases, pursue links between individuals and groups, respond to automatic alerts, and share information efficiently, all from their individual computers.

The project calls for the development of a prototype based on test data that would be deployed at the Army Intelligence and Security Command at Fort Belvoir, Va. Officials would not say when the system would be put into operation.

The system is one of a number of projects now under way inside the government to lash together both commercial and government data to hunt for patterns of terrorist activities.

"What we are doing is developing technologies and a prototype system to revolutionize the ability of the United States to detect, classify and identify foreign terrorists, and decipher their plans, and thereby enable the U.S. to take timely action to successfully pre-empt and defeat terrorist acts," said Jan Walker, the spokeswoman for the defense research agency.

Before taking the position at the Pentagon, Admiral Poindexter, who was convicted in 1990 for his role in the Iran-contra affair, had worked as a contractor on one of the projects he now controls. Admiral Poindexter's conviction was reversed in 1991 by a federal appeals court because he had been granted immunity for his testimony before Congress about the case.


cb0e92 No.196624

File: cc14538745ebc19⋯.png (452.44 KB,896x2534,64:181,Screenshot_2024_07_27_at_2….png)

File: 605677d270f642e⋯.png (541.44 KB,774x2730,129:455,Screenshot_2024_07_27_at_2….png)

File: 450754a59c21f0c⋯.png (548.09 KB,820x2714,410:1357,Screenshot_2024_07_27_at_2….png)

File: 0ea7c4ed5f6187e⋯.png (494.99 KB,746x2568,373:1284,Screenshot_2024_07_27_at_2….png)

https://web.archive.org/web/20240129144218/https://www.nytimes.com/2002/12/10/us/america-under-surveillance-privacy-security-new-tools-for-domestic-spying-qualms.html?searchResultPosition=12

AMERICA UNDER SURVEILLANCE: Privacy and Security; New Tools for Domestic Spying, and Qualms (NYTimes Wayback Link)

By Michael Moss and Ford Fessenden

Dec. 10, 2002


When the Federal Bureau of Investigation grew concerned this spring that terrorists might attack using scuba gear, it set out to identify every person who had taken diving lessons in the previous three years.

Hundreds of dive shops and organizations gladly turned over their records, giving agents contact information for several million people.

"It certainly made sense to help them out," said Alison Matherly, marketing manager for the National Association of Underwater Instructors Worldwide. "We're all in this together."

But just as the effort was wrapping up in July, the F.B.I. ran into a two-man revolt. The owners of the Reef Seekers Dive Company in Beverly Hills, Calif., balked at turning over the records of their clients, who include Tom Cruise and Tommy Lee Jones – even when officials came back with a subpoena asking for "any and all documents and other records relating to all noncertified divers and referrals from July 1, 1999, through July 16, 2002."

Faced with defending the request before a judge, the prosecutor handling the matter notified Reef Seekers' lawyer that he was withdrawing the subpoena. The company's records stayed put.

"We're just a small business trying to make a living, and I do not relish the idea of standing up against the F.B.I.," said Ken Kurtis, one of the owners of Reef Seekers. "But I think somebody's got to do it."

In this case, the government took a tiny step back. But across the country, sometimes to the dismay of civil libertarians, law enforcement officials are maneuvering to seize the information-gathering weapons they say they desperately need to thwart terrorist attacks.

From New York City to Seattle, police officials are looking to do away with rules that block them from spying on people and groups without evidence that a crime has been committed. They say these rules, forced on them in the 1970's and 80's to halt abuses, now prevent them from infiltrating mosques and other settings where terrorists might plot.

At the same time, federal and local police agencies are looking for systematic, high-tech ways to root out terrorists before they strike. In a sense, the scuba dragnet was cumbersome, old-fashioned police work, albeit on a vast scale. Now officials are hatching elaborate plans for dumping gigabytes of delicate information into big computers, where it would be blended with public records and stirred with sophisticated software.

In recent days, federal law enforcement officials have spoken ambitiously and often about their plans to remake the F.B.I. as a domestic counterterrorism agency. But the spy story has been unfolding, quietly and sometimes haltingly, for more than a year now, since the attacks on the World Trade Center and the Pentagon.

Some people in law enforcement remain unconvinced that all these new tools are needed, and some experts are skeptical that high-tech data mining will bring much of value to light.

Still, civil libertarians increasingly worry about how law enforcement might wield its new powers. They say the nation is putting at risk the very thing it is fighting for: the personal freedoms and rights embodied in the Constitution. Moreover, they say, authorities with powerful technology will inevitably blunder, as became evident in October when an audit revealed that the Navy had lost nearly two dozen computers authorized to process classified information.

What perhaps angers the privacy advocates most is that so much of this revolution in police work is taking place in secret, said Cindy Cohn, legal director of the Electronic Frontier Foundation, which represented Reef Seekers.

"If we are going to decide as a country that because of our worry about terrorism that we are willing to give up our basic privacy, we need an open and full debate on whether we want to make such a fundamental change," Ms. Cohn said.

But some intelligence experts say that in a changed world, the game is already up for those who would value civil liberties over the war on terrorism. "It's the end of a nice, comfortable set of assumptions that allowed us to keep ourselves protected from some kinds of intrusions," said Stewart A. Baker, the National Security Agency's general counsel under President Bill Clinton.

Tearing Down a Wall

The most aggressive effort to give local police departments unfettered spying powers is taking place in New York City.

It was there 22 years ago that the police, stung by revelations of widespread abuse, agreed to stop spying on people not suspected of a crime. The agreement was part of a containment wall of laws, regulations, court decisions and ordinances erected federally and in many parts of the country in the 70's and 80's.

The F.B.I.'s spying authority was restricted, and the United States' foreign intelligence agencies got out of the business of domestic spying altogether. States passed their own laws. On the local level, ordinances and consent decrees were enacted not just in New York but also in Los Angeles, Chicago, San Francisco and Seattle. In the years since, these strictures have become "part of the culture," Mr. Baker said.

But the wall is under attack. Last month, a special appeals court ruled that the sweeping antiterrorism legislation known as the U.S.A. Patriot Act, enacted shortly after the September 2001 attacks to give the government expanded terror-fighting capacity, freed federal prosecutors to seek wiretap and surveillance authority in the absence of criminal activity. In Chicago last year, a federal appeals court threw out the agreement that restricted police surveillance. Some officials in Seattle would like to follow suit, saying they are effectively sidelined in the terrorism war.

In New York, the Police Department has sued in federal court in Manhattan to end the consent decree the department signed in 1980 to end a civil rights lawsuit over the infiltration of political groups.

Attorney General John Ashcroft and New York's police commissioner, Raymond W. Kelly, say the wall is a relic – unnecessary and, worse, dangerous. David Cohen, the former deputy director of central intelligence who is now the Police Department's deputy commissioner for intelligence, argues that the consent decree's requirement of a suspicion of criminal activity prevents officers from infiltrating mosques.

"In the last decade, we have seen how the mosque and Islamic institutes have been used to shield the work of terrorists from law enforcement scrutiny by taking advantage of restrictions on the investigation of First Amendment activity," Mr. Cohen said in an affidavit.

The police in other cities cite the same need. "We're prohibited from collecting things that will make us a safer city," said Lt. Ron Leavell, commander of the criminal intelligence division of the Seattle police.

Mr. Cohen did not argue in his affidavit that the authorities, if unshackled, could have prevented the Sept. 11 attacks. But he did suggest that the F.B.I.'s failure to dig more deeply into the information it had before the attacks turned on agents' fears that they could not climb the wall.

"The recent disclosure that F.B.I. field agents were blocked from pursuing an investigation of Zacarias Moussaoui because officials in Washington did not believe there was sufficient evidence of criminal activity to support a warrant points out how one person's judgment in applying an imprecise test may result in the costly loss of critical intelligence," Mr. Cohen said.

Mr. Cohen has also asked that his testimony before the federal court be given in secret, unheard even by opposing lawyers. Last week, a judge told New York City that it needed to present better arguments to justify such extraordinary secrecy.

Civil libertarians, frustrated that they cannot draw the other side into a debate, argue that questions about the need for such expanded powers are critical, and far from answered. "Who said you have to destroy a village in order to save it?" asked Jethro Eisenstein, one of the lawyers who negotiated the original consent decree. "We're protecting freedom and democracy, but unfortunately freedom and democracy have to be sacrificed."

Even the police are far from unanimous about how intrusive they must be. The Chicago police, who have been free from their consent decree for nearly two years, say they have yet to use the new power. The Los Angeles police have made no effort to change their guidelines.

"I have not heard complaints that the antiterrorist division has been inhibited in its work," said Joe Gunn, executive director of the Los Angeles Police Commission.

A joint Congressional inquiry into intelligence failures before Sept. 11 concluded that the failures had less to do with the inability of authorities to gather information than with their inability to analyze, understand, share and act on it.

"The lesson of Moussaoui was that F.B.I. headquarters was telling the field office the wrong advice," said Eleanor Hill, staff director of the inquiry. "Fixing what happened in this case is not inconsistent with preserving civil liberties."

'It Smacks of Big Brother'

The Congressional inquiry's lingering criticism has added impetus to a movement within government to equip terror fighters with better computer technology. If humans missed the clues, the reasoning goes, perhaps a computer will not.

Clearly, the F.B.I. is operating in the dark ages of technology. For instance, when agents in San Diego want to check out new leads, they walk across the street to the Joint Terrorism Task Force offices, where suspect names must be run through two dozen federal and local databases.

Using filters from the Navy's space warfare project, Spawar, the agents are now dumping all that data into one big computer so that with one mouse click they can find everything from traffic fines to immigration law violations. A test run is expected early next year. Similar efforts to consolidate and share information are under way in Baltimore; Seattle; St. Louis; Portland, Ore.; and Norfolk, Va.

"It smacks of Big Brother, and I understand people's concern," said William D. Gore, a special agent in charge at the San Diego office. "But somehow I'd rather have the F.B.I. have access to this data than some telemarketer who is intent on ripping you off."

Civil libertarians worry that centralized data will be more susceptible to theft. But they are scared even more by the next step officials want to take: mining that data to divine the next terrorist strike.

The Defense Department has embarked on a five-year effort to create a superprogram called Total Information Awareness, led by Adm. John M. Poindexter, who was national security adviser in the Reagan administration. But as soon as next year, the new Transportation Security Administration hopes to begin using a more sophisticated system of profiling airline passengers to identify high-risk fliers. The system in place on Sept. 11, 2001, flagged only a handful of unusual behaviors, like buying one-way tickets with cash.

Like Admiral Poindexter, the transportation agency is drawing from companies that help private industry better market their products. Among them is the Acxiom Corporation of Little Rock, Ark., whose tool, Personicx, sorts consumers into 70 categories, like "Group 16M" or "Aging Upscale," based on an array of financial data and behavioral factors.
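Acxiom has never published how Personicx works, but segmentation of this general kind is routinely done with ordinary clustering. A hedged sketch using plain k-means on invented (income, purchases-per-month) data, just to show the mechanics:

    import random

    random.seed(0)

    # Invented consumers drawn around three made-up segment centers.
    consumers = [(random.gauss(income, 8), random.gauss(buys, 2))
                 for income, buys in [(40, 5), (90, 12), (150, 3)]
                 for _ in range(20)]

    def kmeans(points, k, iters=50):
        centers = random.sample(points, k)
        for _ in range(iters):
            groups = [[] for _ in range(k)]
            for p in points:
                nearest = min(range(k), key=lambda c: (p[0] - centers[c][0]) ** 2
                                                    + (p[1] - centers[c][1]) ** 2)
                groups[nearest].append(p)
            # Move each center to the mean of its group (keep it if the group is empty).
            centers = [(sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
                       if g else centers[i] for i, g in enumerate(groups)]
        return centers, groups

    centers, groups = kmeans(consumers, k=3)
    for c, g in zip(centers, groups):
        print(f"segment near ${c[0]:.0f}k income, {c[1]:.1f} buys/month: {len(g)} people")

Real tools would use far more dimensions and many more clusters (70, in Personicx's case), but the principle is the same.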

Experts on consumer profiling say law enforcement officials face two big problems. Some commercial databases have high error rates, and so little is known about terrorists that it could be very difficult to distinguish them from other people.

"The idea that data mining of some vast collection of databases of consumer activity is going to deliver usable alerts of terrorist activities is sheer credulity on a massive scale," said Jason Catlett of the Junkbusters Corporation, a privacy advocacy business. The data mining companies, Mr. Catlett added, are mostly selling "good old-fashioned snake oil."

Libraries and Scuba Schools

As it waits for the future, the F.B.I. is being pressed to gather and share much more intelligence, and that has left some potential informants uneasy and confused about their legal rights and obligations.

Just how far the F.B.I. has gone is not clear. The Justice Department told a House panel in June that it had used its new antiterrorism powers in 40 instances to share terror information from grand jury investigations with other government authorities. It said it had twice handed over terror leads from wiretaps.

But that was as far as Justice officials were willing to go, declining to answer publicly most of the committee's questions about terror-related inquiries. Civil libertarians have sued under the Freedom of Information Act to get the withheld information, including how often prosecutors have used Section 215 of the 2001 antiterror law to require bookstores or librarians to turn over patron records.

The secrecy enshrouding the counterterrorism campaign runs so deep that Section 215 makes it a crime for people merely to divulge whether the F.B.I. has demanded their records, deepening the mystery – and the uneasiness among groups that could be required to turn over information they had considered private.

"I've been on panel discussions since the Patriot Act, and I don't think I've been to one without someone willing to stand up and say, 'Isn't the F.B.I. checking up on everything we do?'" said John A. Danaher III, deputy United States attorney in Connecticut.

Several weeks ago, the F.B.I. in Connecticut took the unusual step of revealing information about an investigation to dispute a newspaper report that it had bugged the Hartford Public Library's computers.

Michael J. Wolf, the special agent in charge, said the agency had taken only information from the hard drive of a computer at the library that had been used to hack into a California business. "The computer was never removed from the library, nor was any software installed on this or any other computer in the Hartford Public Library by the F.B.I. to monitor computer use," Mr. Wolf said in a letter to The Hartford Courant, which retracted its report.

Nevertheless, Connecticut librarians have been in an uproar over the possibility that their computers with Internet access would be monitored without their being able to say anything. They have considered posting signs warning patrons that the F.B.I. could be snooping on their keystrokes.

"I want people to know under what legal provisions they are living," said Louise Blalock, the chief librarian in Hartford.

In Fairfield, the town librarian, Tom Geoffino, turned over computer log-in sheets to the F.B.I. last January after information emerged that some of the Sept. 11 hijackers had visited the area, but he said he would demand a court order before turning over anything else. Agents have not been back asking for more, Mr. Geoffino said.

"We're not just librarians, we're Americans, and we want to see the people who did this caught," he said. "But we also have a role in protecting the institution and the attitudes people have about it."

The F.B.I.'s interest in scuba divers began shortly before Memorial Day, when United States officials received information from Afghan war detainees that suggested an interest in underwater attacks.

An F.B.I. spokesman said the agency would not confirm even that it had sought any diver names, and would not say how it might use any such information.

The owners of Reef Seekers say they had lots of reasons to turn down the F.B.I. The name-gathering made little sense to begin with, they say, because terrorists would need training far beyond recreational scuba lessons. They also worried that the new law would allow the F.B.I. to pass its client records to other agencies.

When word of their revolt got around, said Bill Wright, one of the owners, one man called Reef Seekers to applaud it, saying, "My 15-year-old daughter has taken diving lessons, and I don't want her records going to the F.B.I."

He was in a distinct minority, Mr. Wright said. Several other callers said they hoped the shop would be the next target of a terrorist bombing.


cb0e92 No.196625

File: d28eeca48af13c5⋯.png (2.32 MB,2310x1448,1155:724,Screenshot_2024_07_27_at_2….png)

File: 7f6c3609a77e5ed⋯.png (1.8 MB,1424x2878,712:1439,Screenshot_2024_07_27_at_2….png)

https://steemit.com/discussion/@boodles17/is-facebook-dervived-from-darpas-lifelog-project

Is Facebook Derived from DARPA's LifeLog Project? (Steemit.com)

boodles17 (49) in #discussion • 6 years ago

I think that most of us know how intrusive Facebook has become, especially after the information that has come out over the past several weeks. What I didn't know was that the idea for Facebook may originally have been derived from the DARPA LifeLog project.

And by DARPA I mean the Defense Advanced Research Projects Agency, an agency of the United States Department of Defense responsible for the development of emerging technologies for use by the military.

Not only is that creepy, but the date the LifeLog project was shelved is the same date that Facebook was "born": February 4th, 2004.

My question is: why pay for data mining of personal information through a military project when you can open a social networking app where the users give their information for "FREE".

I don't know about you but I wouldn't pay for information when people would just give it to me through my app.

Also, the other two members of Facebook's Board of Directors in addition to Zuckerberg (Peter Thiel and Jim Breyer) have ties to data mining and DARPA as well.


Does this mean that Facebook is just one huge global psyop disguised as a social media site? Other than making masses of money off their users by providing data to advertisers, what did they hope to gain from a military standpoint?

These are questions that will need further research.


cb0e92 No.196626

File: 70358e5a9e14189⋯.png (1.4 MB,2666x2622,1333:1311,Screenshot_2024_07_27_at_2….png)

File: f4bbd0a557f8a5c⋯.png (558.71 KB,1804x2664,451:666,Screenshot_2024_07_27_at_2….png)

File: 08e22bb24688cb5⋯.png (412.66 KB,1676x2716,419:679,Screenshot_2024_07_27_at_2….png)

File: 908bc9c37f5cdb0⋯.png (434.63 KB,1836x2706,306:451,Screenshot_2024_07_27_at_2….png)

File: ca93ef2d82128eb⋯.png (365.04 KB,1800x2290,180:229,Screenshot_2024_07_27_at_2….png)

https://www.vice.com/en/article/vbqdb8/15-years-ago-the-military-tried-to-record-whole-human-lives-it-ended-badly

15 Years Ago, the Military Tried to Record Whole Human Lives. It Ended Badly (Vice.com)

In mid-2003, the US Defense Advanced Research Projects Agency launched an ambitious program aimed at recording essentially all of a person's movements and conversations and everything they listened to, watched, read and bought.

The idea behind the LifeLog initiative was to create a permanent, searchable, electronic diary of entire lives. Not only would a lifelog immortalize users, in a sense, it would also contribute to a growing body of data that military researchers hoped would contribute to the development of artificial intelligence capable of thinking like a human being does.

LifeLog was an iPhone before there were iPhones, social media before there was social media. It was potential all-seeing government surveillance before anyone worried about the NSA or had heard of Edward Snowden.

LifeLog arguably was years ahead of its time. But today, it's just a footnote in tech history. Barely a year after it began, the LifeLog program abruptly ended, effectively shamed out of existence by privacy advocates and the media.

And then, over the following decade, much of what LifeLog aimed to achieve happened, anyway. A failed military cyber-diary from 15 years ago was, in a way, a preview of our smartphone-addicted, Facebooking, government-surveilled present.

At the same time, LifeLog was "a cautionary tale regarding privacy controversies,” its creator Douglas Gage told me during a series of phone and email interviews.

The ideas behind LifeLog are much, much older than the program itself. In 1945, a government scientist named Vannevar Bush described an idea he termed "Memex." It was, in some ways, a prescient flash forward to smartphones.


Memex, Bush wrote in The Atlantic in 1945, would be a "device in which an individual stores all his books, records and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility."

Of course, 1940s technology wasn't up to the task of recording a person's every conversation and everything they read. It took nearly 70 years for the tech to catch up to Bush's vision. In late 2001, Gordon Bell, a computer scientist consultant, volunteered to be the subject of MyLifeBits, a life-logging experiment run by computer scientists Jim Gemmell and Roger Lueder for Microsoft.

For 17 years running, Bell has digitized and saved, well, everything. "A lifetime’s worth of articles, books, cards, CDs, letters, memos, papers, photos, pictures, presentations, home movies, videotaped lectures and voice recordings," according to the project's website.

In later years Bell added phone calls, instant-messaging transcripts, television and radio to his record. Meanwhile, Gemmell and Lueder wrote software for indexing and searching Bell's log.

To the experiment's architects, its value was self-evident. "Given only one thing that could be saved as their house burns down, many people would grab their photo albums or such memorabilia," the three men wrote in a 2002 paper.

DARPA, however, saw the military value in a comprehensive record of a person's life. In late 2002 the agency had launched a wide-ranging effort to develop new, more sophisticated artificial intelligence. The $7.3 million Cognitive Computing initiative included an "enduring personalized cognitive assistant"—basically, an artificial intelligence secretary that could learn by watching.

To replicate human decision-making, the AI assistant would need data on human behavior. Lots of it. Gage, a former Navy researcher with more than 25 years' experience, had recently joined DARPA. He had a plan for gathering that data.


cb0e92 No.196627

File: 4784029374ecd19⋯.png (452.75 KB,1810x2816,905:1408,Screenshot_2024_07_27_at_2….png)

File: e55e189422c2094⋯.png (428.09 KB,1694x2742,847:1371,Screenshot_2024_07_27_at_2….png)

File: e975be562516d9d⋯.png (434.38 KB,1712x2776,214:347,Screenshot_2024_07_27_at_2….png)

File: 862df3284a00111⋯.png (51.11 KB,1502x280,751:140,Screenshot_2024_07_27_at_2….png)

>>196626

"I hate to say 'Orwellian,' but I think that's what my reaction was."

Drawing inspiration from Bush and Bell, Gage proposed LifeLog. If enough people recorded enough of their lives, the combined information would amount to "the ontology of a human life," Gage told me.

His bosses liked the idea. "DARPA clearly saw how increasing digitization of human experience would make the data needed to model everyday life accessible in machine-readable form," Lee Tien, a privacy lawyer with the Electronic Frontier Foundation, told me.

Gage got initial approval for his project and, in December 2002, began workshopping the idea with fellow scientists and engineers. "The research community was very enthusiastic," Gage told me.

"My father was a stroke victim, and he lost the ability to record short-term memories," Howard Shrobe, an MIT computer scientist, told Wired in defense of LifeLog. "If you ever saw the movie Memento, he had that. So I'm interested in seeing how memory works after seeing a broken one. LifeLog is a chance to do that."

Privacy advocates, by contrast, reacted with revulsion. "I hate to say 'Orwellian,' but I think that's what my reaction was," Steven Aftergood, an analyst with the Federation of American Scientists, told me. "It seemed like a massively intrusive initiative that went far beyond what an ordinary person would willingly and knowingly consent to."

In 2003, Aftergood, Lee and other experts were on high alert for new, potentially intrusive surveillance technologies. In February of that year, DARPA had launched a new surveillance effort it called "Total Information Awareness." TIA's sophisticated software cross-referenced phone calls, internet traffic, bank records, and other personal data in an effort to identify potential terrorists.

Congress shut down TIA after just a few months. But for Gage and DARPA, the damage was done. "LifeLog has the potential to become something like 'TIA cubed,'" Aftergood told Wired at the time.

Gage told me the criticism took him by surprise. "[Journalist Noah] Shachtman’s Wired article was the full flowering of paranoia," he told me. Gage said he never intended for LifeLog to spy on people. "The critics completely mischaracterized LifeLog as a collection system, when the focus was the classification and fusion of low-level multidimensional data to infer higher level 'knowledge' of the course of a single person’s life."

Gage insisted that LifeLog users would be able to choose which facets of their lives the system recorded, and who had access to the resulting data.

But the pamphlet DARPA handed out to researchers who might want to join the LifeLog program did point to LifeLog's potential as a surveillance tool. "LifeLog will be able … to infer the user’s routines, habits and relationships with other people, organizations, places, and objects," the pamphlet explained, "and to exploit these patterns to ease its task."
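That inference step is less exotic than it sounds: given timestamped sightings, even a simple frequency count yields a crude routine. A toy illustration in Python (invented data; obviously not DARPA's code):

    from collections import Counter

    # Invented (hour-of-day, place) sightings from a few days of logging.
    sightings = [
        (8, "cafe"), (9, "office"), (12, "deli"), (13, "office"), (18, "gym"),
        (8, "cafe"), (9, "office"), (12, "deli"), (18, "gym"),
        (8, "cafe"), (9, "office"), (12, "park"),
    ]

    by_hour = {}
    for hour, place in sightings:
        by_hour.setdefault(hour, Counter())[place] += 1

    # The most frequent place per hour is a first-cut "routine".
    for hour in sorted(by_hour):
        place, count = by_hour[hour].most_common(1)[0]
        total = sum(by_hour[hour].values())
        print(f"{hour:02d}:00 -> usually at {place} ({count}/{total} sightings)")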

*

News of the program spread.

In June 2003, The New York Times' William Safire blasted LifeLog as an "all-remembering cyberdiary" with insidious side-effects as people became walking government data-collectors. "Everybody would be snooping on everybody else," Safire warned.

LifeLog's problems multiplied. In July 2003, DARPA began offering grants in support of Gage's work. The grant guidelines seemed to underscore the privacy concerns. "Researchers who receive LifeLog grants will be required to test the system on themselves," Shachtman explained in a July 2003 follow-up Wired article.

"Cameras will record everything they do during a trip to Washington, DC, and global-positioning satellite locators will track where they go," Shachtman wrote. "Biomedical sensors will monitor their health. All the e-mail they send, all the magazines they read, all the credit card payments they make will be indexed and made searchable."

The writing was on the wall. In February 2004, then-DARPA director Tony Tether cancelled LifeLog. "Change in priorities," agency spokesperson Jan Walker explained.

Gage was in the middle of evaluating proposals and preparing to hire researchers when Tether pulled the plug. "I think he had been burnt so badly with TIA that he didn’t want to deal with any further controversy with LifeLog," Gage told me. "The death of LifeLog was collateral damage tied to the death of TIA."

"Canceling it was the path of least resistance," Aftergood added.

Not long after LifeLog's demise, Gage's contract came up for renewal. "Tony elected not to extend my appointment," said Gage, now retired from government service. In the time since, he's done some part-time consulting and taken up sailing and choral singing.

Absent Gage, aspects of LifeLog might have survived, albeit under a different name. "It would not surprise me to learn that the government continued to fund research that pushed this area forward without calling it LifeLog," Lee said. As far as we know, nothing came of DARPA’s AI secretary.

It is the private sector, not the government, that has come closest to turning Gage's LifeLog, Bell's MyLifeBits, and Bush's Memex into reality for millions of people. And ironically for privacy advocates, we practically beg for it.

In 2004, Mark Zuckerberg and Eduardo Saverin founded Facebook. Three years later, Apple introduced the iPhone. Aftergood described smartphones and social media as "LifeLog equivalents."

More recently, wearable devices and smart-home systems like Alexa have accelerated our acceptance of digital life logs, according to Lee.

“I think that Facebook is the real face of pseudo-LifeLog at this point,” Gage said. But LifeLog’s creator said he avoids the all-seeing social network. "I generally avoid using Facebook, only occasionally logging in to see what everyone is up to, and have never 'liked' anything."

His caution is understandable. Both Facebook and Apple have come under fire for gathering users' data and passing it along to the government. "We have ended up providing the same kind of detailed personal information to advertisers and data brokers and without arousing the kind of opposition that LifeLog provoked," Aftergood said.

Gage, for his part, said he's devised his own LifeLog surrogate using Apple's iCalendar. "I mis-use iCal as my diary, and have waded through my travel records and copious piles of personal and professional memorabilia to fill in my past timeline—but, of course, it gets ever more sparse the farther back I go," Gage told me.

"I would like to tie all my photos into this in a coherent fashion, but I really don’t know how,” Gage added. “I want my LifeLog!"


But the public has rejected military-developed, government-run digital life records in favor of similar systems developed and run by corporations. It doesn't seem to matter to most people that corporate social media watch them arguably as much as a government system would have.

And the government mines social media for people’s data, anyway. In October 2016 the American Civil Liberties Union revealed that police had been working with a company called Geofeedia to track peaceful protesters on Facebook, Twitter, and Instagram.

Meanwhile, Silicon Valley firm Palantir set up a "predictive policing" system in New Orleans that helped authorities anticipate potential gang ties between social media users and predict when those suspected gang members might perpetrate crimes.

Apps from Geofeedia and Palantir and other surveillance tools largely tap into data that people voluntarily share on social media. LifeLog reflected people’s growing willingness and ability to keep a comprehensive digital record of their lives—and the government willingness and ability to capture those records—more than it drove those trends.

"The growing digitization of all kinds of personal transactions, combined with the feasibility of collecting and interpreting the resulting data," Aftergood said, "made something like LifeLog conceivable if not inevitable."



cb0e92 No.196628

File: c6614f7f9bb519d⋯.png (5.43 MB,2228x2752,557:688,Screenshot_2024_07_28_at_2….png)

File: c2bd74f3a6e5b26⋯.png (2.23 MB,1382x1498,691:749,Screenshot_2024_07_28_at_2….png)

http://www.shrinkrap.net/2006/11/flogging-gordon-bells-memory.html

Flogging Gordon Bell's Memory

I forget where and when I've heard his name before, but when I got to the airport and picked up something to read on the plane, my thalamus filtered down onto Fast Company's cover article (What If You Never Forgot Anything? [use access code FCNOVENG]) on Microsoft's Gordon Bell, ringing a bell in my head.

The bell ringing was attached to other dimly recalled bits -- Xanadu (or something like that... [ed: it's Memex]), a guy from the 1950's (what is his name [ed: Vannevar Bush]? I wish BWI had free internet access so I could google it), and past articles I've read about neurocomputer interfaces. The article is about this brilliant Microsoft researcher who has spent the last 7 years recording every single interaction he has. Conversations, phone calls, emails, faxes, paper documents... you name it. He accumulates an average of over a megabyte per hour, a gigabyte per month. He's obviously not keeping any video (but he does snap a photo every 60 seconds).
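Those two figures are roughly consistent with each other; a quick back-of-envelope check (assuming round-the-clock capture):

\[
1\,\tfrac{\text{MB}}{\text{hr}} \times 24\,\tfrac{\text{hr}}{\text{day}} \times 30\,\tfrac{\text{days}}{\text{month}} = 720\,\tfrac{\text{MB}}{\text{month}} \approx 0.7\,\text{GB},
\]

so "a gigabyte per month" works out to about 1.4 MB per hour, i.e. "over a megabyte per hour," as stated.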

Why? It's an experiment in computer-assisted human memory, or maybe call it "memory augmentation". It's a log of your life, or a lifelog. Editor Mark Vamos would miss the ability to forget those memories which evoke embarrassment or regret, but the delete key (or hard drive failure) could take care of that.

Okay... it came back to me as I read the article. I've been to his website, MyLifeBits, before, when I saw something about this a few years ago. I can see the utility of something like this.

Well, the challenge with this sort of thing (which, I must say, is pretty cool) is not in doing it. It lies in the ability to search the info... searching text (easy), audio (harder), and images (harder still)... while also being able to easily access and efficiently use associated data and metadata.

If you want to start your own MyLifeBits experiment, writer Clive Thompson includes a 7-item shopping list [use access code FCNOVENG].

So, I'm thinking about the impact this would have on Psychiatry. Now I'm putting myself in the patient's place. Recorders blaring, I could easily review my therapy session and get more bang for my emotional buck. If my therapist flogged too (flog=lifelog... I'm still on the plane so I cannot google "flog", but I'm sure that I can't be the first to coin this term), I could tap into her system and see my reactions from her perspective, maybe in a picture-in-picture sorta deal.

Some quotes...

"Frank Nack: 'I'm a big fan of forgetting. I don't want to be reminded of everything I said.' Forgetting ... is key to cultural concepts like forgiveness and nostalgia."

"...knowing that everything is being logged might actually turn us into different people. We might be less flamboyant, less funny, less willing to say risky but potentially useful things..."

"If you lose your keys, you can scroll back and figure out where you put 'em."

"But the real goal is to 'discover things that even you didn't know that you knew.' "

"In spring 2004, Gemmel lost a chunk of his memory... [His] hard drive crashed, and he hadn't backed up in four months. When he got his MyLifeBits back up and running, the hole that had been punched in his memories was palpable, even painful."

The article also reviews experimental software which mines the data in Gordon's LifeBits. It associates unexpected ideas based on past memories, recalls long-forgotten bits at just the right time, and creates new information, connections, and ideas buried in your flogs.

Like a good therapist.

This could put quite a few therapists out of business. But it would also open up a whole new area of psychotherapy -- lifelog-assisted psychotherapy ("flog therapy"?). This could only develop after folks have flogged quite a bit of their life, I would think. So the therapist would become a sort of guide, teaching folks new, psychodynamically-informed methods of mining their flogs and tapping into their "unconscious."

Well, I guess I've gone out on a limb here. But probably not much further than I did in Reality Therapy Vlog.

The flight attendant is making us put our portable electronic devices away and place our tray tables in their upright, locked position. If I had my flogging equipment, I'd show you all her picture (looks kinda like Bjork, very cute) and you could hear her admonish the guy in front of me who was refusing to turn off his iPod. Alas, it will all be a dim memory in a few weeks. Gotta go.

Blogged with Flock


cb0e92 No.196728


VIDEO: Vannevar Bush (YouTube)


cb0e92 No.196739

File: 18668a184470eb4⋯.png (590.19 KB,1070x2440,107:244,Screenshot_2024_07_29_at_0….png)

File: 7d94dea323d083e⋯.png (581.75 KB,1150x2602,575:1301,Screenshot_2024_07_29_at_0….png)

File: 446b41cac038768⋯.png (596.18 KB,1102x2766,551:1383,Screenshot_2024_07_29_at_0….png)

https://web.archive.org/web/20120314184545/http://research.microsoft.com/en-us/projects/mylifebits/default.aspx

MyLifeBits (Wayback Link)

MyLifeBits is a lifetime store of everything. It is the fulfillment of Vannevar Bush's 1945 Memex vision, including full-text search, text & audio annotations, and hyperlinks.

Total Recall is coming out this September. This book is the culmination of our thoughts regarding MyLifeBits and the larger CARPE research agenda. Stay up to date at the Total Recall blog.

There are two parts to MyLifeBits: an experiment in lifetime storage, and a software research effort.

The experiment: Gordon Bell has captured a lifetime's worth of articles, books, cards, CDs, letters, memos, papers, photos, pictures, presentations, home movies, videotaped lectures, and voice recordings and stored them digitally. He is now paperless, and is beginning to capture phone calls, IM transcripts, television, and radio.

The software research: Jim Gemmell and Roger Lueder have developed the MyLifeBits software, which leverages SQL Server to support: hyperlinks, annotations, reports, saved queries, pivoting, clustering, and fast search. MyLifeBits is designed to make annotation easy, including gang annotation on right click, voice annotation, and web browser integration. It includes tools to record web pages, IM transcripts, radio and television. The MyLifeBits screensaver supports annotation and rating. We are beginning to explore features such as document similarity ranking and faceted classification. We have collaborated with the WWMX team to get a mapped UI, and with the SenseCam team to digest and display SenseCam output.
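The Microsoft schema itself is documented only in the papers below, but the shape of the system (items, annotations hung off them, full-text search) can be sketched in a few lines. A minimal stand-in using Python's sqlite3 instead of SQL Server (assumes an SQLite build with the FTS5 extension, which most CPython distributions include):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE items(id INTEGER PRIMARY KEY, kind TEXT, captured TEXT);
        CREATE TABLE annotations(item_id INTEGER REFERENCES items(id), note TEXT);
        CREATE VIRTUAL TABLE item_text USING fts5(body);
    """)

    # One captured item, its searchable text, and a hand-added annotation.
    db.execute("INSERT INTO items VALUES (1, 'email', '2003-11-07')")
    db.execute("INSERT INTO item_text(rowid, body) VALUES (1, 'lunch with Jim about the Memex demo')")
    db.execute("INSERT INTO annotations VALUES (1, 'follow up next week')")

    # "Fast search": full-text match joined back to items and their annotations.
    for row in db.execute("""
            SELECT i.id, i.kind, i.captured, a.note
            FROM item_text
            JOIN items i ON i.id = item_text.rowid
            LEFT JOIN annotations a ON a.item_id = i.id
            WHERE item_text MATCH 'memex'"""):
        print(row)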

Support for academic research: Our team led the 2005 Digital Memories (Memex) RFP, which supported 14 universities and led to an impressive list of publications. We also established the ACM CARPE Workshops: CARPE 2004, CARPE 2005, CARPE 2006

Watch our demo videos

Papers

Gordon Bell and Jim Gemmell, A Digital Life, Scientific American, March 2007. German version in Spektrum der Wissenschaft (April, 2007)

Wang, Zhe, and Gemmell, Jim, Clean Living: Eliminating Near-Duplicates in Lifetime Personal Storage, Microsoft Research Technical Report MSR-TR-2006-30, March 2006.

Jim Gemmell, Gordon Bell and Roger Lueder, MyLifeBits: a personal database for everything, Communications of the ACM, vol. 49, Issue 1 (Jan 2006), pp. 88-95. PDF (0.5 MB)

Extended version published as Microsoft Research Technical Report MSR-TR-2006-23. Word (3 MB), PDF (1 MB), Abstract

Gemmell, Jim, Aris, Aleks, and Lueder, Roger, Telling Stories With MyLifeBits, ICME 2005, July 6-9 2005 PDF (1 MB)

Gemmell, Jim, Williams, Lyndsay, Wood, Ken, Bell, Gordon and Lueder, Roger, Passive Capture and Ensuing Issues for a Personal Lifetime Store, Proceedings of The First ACM Workshop on Continuous Archival and Retrieval of Personal Experiences (CARPE '04), Oct. 15, 2004, New York, NY, USA, pp. 48-55. Word (2 MB) PDF (1 MB)

Aris, Aleks, Gemmell, Jim and Lueder, Roger, Exploiting Location and Time for Photo Search and Storytelling in MyLifeBits, Microsoft Research Technical Report MSR-TR-2004-102, October 2004 Word (1.5MB) PDF (0.8 MB) Abstract

Gemmell, Jim, Lueder, Roger, and Bell, Gordon, The MyLifeBits Lifetime Store, ACM SIGMM 2003 Workshop on Experiential Telepresence (ETP 2003), November 7, 2003, Berkeley, CA. Word (1.5 MB), PDF (1.5 MB)

Living With a Lifetime Store, Gemmell, Jim, Lueder, Roger, and Bell, Gordon, ATR Workshop on Ubiquitous Experience Media, Sept. 9-10, 2003, Keihanna Science City, Kyoto, Japan. Word (1.5 MB), PDF (1.5 MB)

MyLifeBits: Fulfilling the Memex Vision, Gemmell, Jim, Bell, Gordon, Lueder, Roger, Drucker, Steven, and Wong, Curtis, ACM Multimedia '02, December 1-6, 2002, Juan-les-Pins, France, pp. 235-238. Word (1.4 MB) PDF (297 KB)

Storage and Media in the Future When you Store Everything, Gordon Bell and Jim Gemmell

Presentations

Gordon Bell's SIGMOD Keynote (June 14, 2005): MyLifeBits, A Transaction Processing Database for Everything Personal. The talk included project history, demonstration screens, architecture, size and shape of the Bell database (200,000 items, 100 GBytes), and research challenges for the database community. PowerPoint (22 MB)

Jim Gemmell's MyLifeBits talk given at a number of universities: Feb 2005 version PowerPoint (10 MB)

Gordon Bell's talk, given at BayCHI, on 11 February 2003 at PARC, Palo Alto (4.8 MByte PPT) and U.S. Naval Post Graduate School, Monterey on 6 February 2003.

MyLifeBits: A lifetime personal store beginning at 1:22. Streaming webcast of Bell by Austrian Telecom at Austria's European (Technology) Forum Alpbach, Plenary Session speaker, "The World of Tomorrow", held Thursday 26 August 2004. See also the PowerPoint presentation (approx. 10 MB).

MyLifeBits In The News

Du sollst nicht vergessen ("Thou shalt not forget"), Der Spiegel, 4/14/2008

Total Recall: Storing every life memory in a surrogate brain, ComputerWorld, 4/2/2008

Don't forget to back up your brain, Fox News, 11/14/2007

Remember This?, The New Yorker, May 28, 2007

Total recall becomes a reality, The Telegraph, 4/21/2007

Your Whole Life is Going to Bits, Sydney Morning Herald 4/14/2007

Researcher Records His Life On Computer, CBS Evening News 4/9/2007

Perfect Memory, WATTnow, March 2007

Lifeblogging: Is a virtual brain good for the real one? Ars technica, 2/7/2007

On the Record, All the Time, Chronicle of Higher Education, 2/4/2007

Digital Diary, San Francisco Chronicle, 1/28/2007

The Persistence of Memory, NPR Radio "On the Media" show, 1/5/2007

How Microsoft’s Gordon Bell is Reengineering Human Memory (and Soon, Your Life and Business), Fast Company, Nov 2006.

Digital age may bring total recall in future, CNN 10/16/2006.

El hombre que guarda todos los recuerdos de su vida en bits ("The man who stores all of his life's memories in bits"), La Crónica de Hoy (Mexico), 7/16/2006.

That's My Life, Aria Magazine April 2006.

The ultimate digital diary The Dominion Post 5/31/2006

In 2021 You'll Enjoy Total Recall Popular Science 5/18/2006

The Memory Machine, Varsity.co.uk, 3/2/2006

Life Bytes, NPR Radio "Living on Earth" show, 1/20/2006

The man with the perfect memory - just don't ask him to remember what's in it The Guardian, 12/28/2005

Bytes of my life, Hindustan Times, 11/17/2005

Total Recall, IEEE Spectrum, 11/1/2005 Podcast on IEEE Spectrum Radio (Choose arrow on October 2005 show and select "MyLifeBits – the digitized life of Gordon Bell")

Turning Your Life Into Bits, Indexed, Los Angeles Times 7/11/2005

Wouldn't It Be Nice The Wall Street Journal 5/23/2005

Life Bits IEEE Spectrum Online May 2005

How To Be A Pack Rat, Forbes.com 4/29/2005 - see also blog entry by Thomas Hawk at eHomeUpgrade

Computer sage cuts paperwork, converts his life to digital format The Seattle Times 4/9/2005

Channel 9 video interviews 8/21/2004: Intro, Gemmell, Lueder

Slices of Life Spiked-Online 8/19/2004

Next-generation search tools to refine results CNET 8/9/2004

Life in byte-sized pieces The Age, 7/18/2004

Removable Media For Our Minds TheFeature 3/25/2004

This is Your Life San Jose Mercury News 3/6/2004

Navigating Digital Home Networks New York Times 2/19/2004

Offloading Your Memories New York Times Magazine Year in Ideas issue 12/14/2003 "Bright notions, bold inventions, genius schemes and mad dreams that took off (or tried to) in 2003"

Logged on for life Toronto Star 9/8/2003

This is your life–in bits U.S. News & World Report 6/23/2003

My Life in a Terabyte IT-Analysis.com 5/14/2003

How MS will know ALL about you ZD AnchorDesk 4/18/2003

Memories as Heirlooms Logged Into a Database The New York Times 3/20/2003

Microsoft Fair Forecasts Future AP 2/27/2003 (This story ran on many newspapers and news sites, including USA Today, The Globe and Mail, The San Jose Mercury News, and ABC News)

This Is Your Brain on Digits ABC News 2/5/2003

A life in bits and bytes c|net News.com 1/6/2003 (run also by ZDNet)

Your Life - On The Web Computer Research & Technology 12/20/2002

Saving Your Bits for Posterity Wired 12/6/2002

Microsoft works to create back-up brain Knowledge Management 11/25/2002

Microsoft Creating Virtual Brain NewsFactor Network 11/22/2002

Microsoft solves "giant shoebox problem" Geek.com 11/22/2002

Would you put your life in Microsoft's hands? Silicon.com (run also by ZDNet News) 11/21/2002

Microsoft Plans Digital Memory Box, a Step Toward "Surrogate Brain" BetterHumans 11/21/2002

E-hoard with Microsoft's life database vnunet.com IT Week 11/21/2002

Microsoft plans online life archive BBC News 11/20/2002

Software aims to put your life on a disk New Scientist 11/20/2002

Related links

As We May Think, by Vannevar Bush, The Atlantic Monthly, 176(1), July 1945, 101-108.

Many more links can be found at the CARPE Research Community web site


cb0e92 No.196780

File: 2cf6df16b769fc5⋯.png (571.36 KB,2178x2026,1089:1013,Screenshot_2024_07_29_at_0….png)

File: e644cf9956216b4⋯.png (817.39 KB,2314x2386,1157:1193,Screenshot_2024_07_29_at_0….png)

File: cc4d19efc6a430e⋯.png (855.54 KB,2300x2746,1150:1373,Screenshot_2024_07_29_at_0….png)

https://cryptome.org/spy-dotnet.htm

USA PATRIOT Act Surveillance

By Daniel Brandt, NameBase, http://www.pir.org/

There have been suggestions recently that the FBI will be fairly aggressive in its use of the new Internet surveillance portions of the anti-terror law.

The source for this is Stewart Baker, a former NSA lawyer [this was found on a newsgroup through Google Groups]:

> From: "Baker, Stewart" <SBaker@steptoe.com>

> To: "declan@well.com" <declan@well.com>

> cc: "Albertazzie, Sally" <SAlbertazzie@steptoe.com>

> Subject: Fox News goes overboard

> Date: 29 Oct 2001 09:48:17 -0500

>

> Fox News recently reported that the FBI has a plan to change the

> architecture of the Internet, centralizing it and providing "a

> technical backdoor to the networks of Internet service providers."

> Like many others, I thought this was big news, and rather surprising.

> Until I realized that the reporter only cited one source and that

> it was, well, me. Fox News's claims go beyond the facts I provided

> to her, and beyond any that I know about.

>

> To be clear, I believe that the FBI is at work on an initiative to

> make Internet communications, indeed any packet data communications,

> more susceptible to intercept and more productive of non-content

> data about communications – the sort of "pen register" data that

> was expressly approved for Internet communications in the recent

> antiterrorism bill. This initiative will have architectural

> implications for packet data communications systems. The FBI is

> likely to press providers of those services to centralize communications

> in nodes where interception will be more convenient, and it is

> likely to call on packet data services to build systems that provide

> more information about the communications of their subscribers.

>

> The vehicle for this initiative is CALEA, the Communications

> Assistance for Law Enforcement Act, a 1994 enactment that actually

> requires telecom carriers to redesign their networks to provide

> better wiretap capabilities. The act is supposed to exempt

> information services, but the vagueness of that provision has

> encouraged the FBI to expand its mandate into packet-data

> communications. The Bureau is now preparing a general CALEA proposal

> for all packet-data systems. While I have not seen it, the Bureau's

> past interventions into packet-data and other communications

> architecture have had two characteristics – they have sought more

> centralization in order to simplify interception and they have

> asked providers to generate new data messages about their subscribers'

> activities – messages that are of value only to law enforcement.

>

> There are real legal and policy questions that should be raised

> about this effort. In my view, it goes beyond what Congress intended

> in 1994. And the implications for Internet users and technologies

> deserve to be debated. But making these points, as I did with Fox

> News, is not the same as saying that the FBI has a firm plan to

> centralize the Internet and build back doors into all ISP networks.

> If Fox News wants to break that story, it will need a source other

> than me.

>

> –

> Stewart Baker

> Steptoe & Johnson LLP

> 1330 Connecticut Avenue, N.W.

> Washington, DC 20036

I would like to add a dimension that has not been covered in this discussion of Internet surveillance. One item I brought up earlier was the possibility that the search terms added to the URL for Net searching are now fair game, because they can be considered part of the address as opposed to part of the content. I note with approval that the problem of search terms for engines such as Google has been mentioned both by the ACLU and by the EFF in their analysis of the new law.
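A minimal Python sketch of this point (the URL and search terms are invented for illustration): the query rides inside the URL itself, so a pen-register-style tap that logs only "addressing" data still captures what the user searched for.

from urllib.parse import urlparse, parse_qs

# Hypothetical URL as a pen-register-style tap might log it
url = "https://www.google.com/search?q=how+to+encrypt+email&hl=en"
parsed = urlparse(url)

print(parsed.netloc)           # "www.google.com" -- classic addressing data
print(parse_qs(parsed.query))  # {'q': ['how to encrypt email'], 'hl': ['en']}
# The q= parameter is formally part of the "address", yet it reveals exactly
# what the user was thinking about -- content by another name.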

But there's a further dimension that occurred to me more recently, that bears watching. That is the dimension of what's referred to generally as "traffic analysis." NameBase has a visualized "proximity search" that draws social network diagrams based on public information, and it was during the course of developing this several years ago that I became familiar with what the intelligence agencies are doing with traffic analysis and data visualization.

Federal agencies such as the NSA, CIA, FinCEN, and DEA have been doing a great deal of traffic analysis of telecommunications. Presumably this has mostly involved circuits outside the U.S. and traffic that is non-Internet in nature, such as telephone toll records. This has been going on for at least ten years, and it is a rather well-developed field by now.

There are a number of software vendors that write programs for traffic analysis. These are frequently termed "link analysis" or "cluster analysis" or "network analysis" software, which involves "data mining" and "visualization." These vendors often contract with the government. They don't talk much about their work, but you can search Google Groups and get hints of what's been happening in the field.
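For a sense of what such link-analysis software does at its core, here is a toy Python sketch using the networkx graph library; the who-contacted-whom records are synthetic, and real vendor tools add time, volume, and visualization, but the underlying graph math is this idea.

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical contact records: (caller, callee) pairs
records = [
    ("A", "B"), ("A", "C"), ("B", "C"),   # one tight cluster
    ("X", "Y"), ("Y", "Z"), ("X", "Z"),   # a second cluster
    ("C", "X"),                           # a lone bridge between them
]
G = nx.Graph(records)

# Find clusters of cross-connect activity
for community in greedy_modularity_communities(G):
    print("cluster:", sorted(community))

# Who sits at the center of the traffic? High centrality = "worth a look"
print(nx.degree_centrality(G))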

The new law allows data sharing between agencies. The FBI, which is now empowered to employ link analysis on the Internet, is in a position to obtain software expertise from those agencies that are more advanced in this field. It is also in a position to place a next-generation Carnivore close to some major Internet hubs and mine most of the traffic for purposes of traffic analysis.

Pundits like to say that they're not worried, because the FBI couldn't possibly monitor all the data that flows over the Net.

That's not what the FBI wants to do with their Net surveillance, in my opinion. Rather, they want to be able to visualize clusters of cross-connect activity, perhaps based on some prior parameters, and see if the clusters suggest that certain IP addresses may be worthy of further investigation.

This is considered an excellent technique when you don't have any other clues or leads to work from, because it at least allows you to get started by focusing on a subset of the data, based on patterns that look interesting.

The fact that nearly all search engines pass the search terms as query-string information appended to the URL means that all these terms are very handy for use in traffic analysis. These terms are tightly focused already, because the user put some thought into which terms will deliver the desired data. It would be very easy to integrate these search terms into a traffic analysis software package. It would be much more interesting than telephone toll numbers, because there's much more data available for crunching in a software program.
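Folding captured search terms into such a package could look as simple as the following sketch; all addresses, queries, and watchlist terms here are made up, and the "prior parameters" are just a term set.

from collections import Counter, defaultdict

# Hypothetical (source_ip, query) pairs as a tap might log them
captured = [
    ("10.0.0.5", "fertilizer bulk price"),
    ("10.0.0.5", "rental trucks one way"),
    ("10.0.0.9", "chocolate cake recipe"),
]
watchlist = {"fertilizer", "trucks"}  # the "prior parameters"

# Tally terms per source address
profiles = defaultdict(Counter)
for ip, query in captured:
    profiles[ip].update(query.split())

# Flag profiles that match the parameters
for ip, terms in profiles.items():
    hits = watchlist & set(terms)
    if hits:
        print(ip, "flagged on", sorted(hits))  # -> 10.0.0.5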

When combined with email "to" and "from" addresses, and analyzed on a mainframe or distributed network of computers, you could zero in on anything suspicious happening on the net and target sub-populations of IP addresses for further scrutiny, or for past behavior if you have logs that go back in time. This is all very easy to do. Look what Google has been able to do with its 10,000 networked Linux boxes. It can crawl the entire Web once a month and handle over 110 million searches per day, all on $50 million or so per year, which isn't much by government standards.
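The arithmetic behind that Google comparison is worth a back-of-the-envelope check (the figures are the ones quoted above, not independently verified):

searches_per_day = 110_000_000
boxes = 10_000
budget_per_year = 50_000_000  # dollars

# ~0.13 searches per second per box -- a modest load per machine
print(searches_per_day / boxes / 86_400)

# ~$0.001 per search at the quoted budget
print(budget_per_year / (searches_per_day * 365))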

There's no possibility of overload, despite the pundits. All you have to do is add more boxes, or scale down the parameters a bit on the front end so that less traffic gets analyzed.

This, I feel, is what the FBI plans to do with their new access to the Net. Essentially, it means the end of Internet privacy.

From: NameBase@cs.com

Date: Thu, 8 Nov 2001 12:19:56 EST

Subject: CIA and web surveillance

To: jya@pipeline.com

http://www.siliconvalley.com/docs/opinion/termsheet/mm110801.htm

2001-11-07

Start-up helps CIA in terrorism fight

Agency's venture arm takes stake in Stratify

By Matt Marshall

Mercury News

The Central Intelligence Agency may seem a bizarre source of support for struggling Silicon Valley start-ups, but it may be a sure patron in a dour economy.

Ask Nimish Mehta, chief executive of Mountain View's Stratify, formerly known as Purple Yogi. His company combs through billions of Web pages to find answers to users' questions.

This week, it accepted millions in venture funding from the CIA's venture capital arm, In-Q-Tel. In return, In-Q-Tel wants Stratify's help in trawling through millions of Web and other electronic documents, including those written in Middle Eastern languages. "That would be nice to have," says Eric Kaufmann, a partner at In-Q-Tel's Menlo Park office.

Neither the CIA nor the company will disclose the exact amount of the funding, for fear of offending the CIA's other portfolio companies, which have gotten less. The amount was more than $1 million but less than $5 million.

The deal could be a good omen for Stratify, which wasn't pulling in much revenue under its dot-com business model. Indeed, if Mehta has his way, he'll be stealing a page from Oracle CEO Larry Ellison's playbook.

Rewind about 25 years. Back in the late 1970s, the spy agency became Oracle's first customer. A happy camper with Oracle, the CIA helped open doors for Ellison at other government agencies and corporations.

This way, Ellison survived through the recession years of the early 1980s with no venture capital injections at all. And by not watering down ownership with VC investments, Ellison emerged with 39 percent of Oracle's shares – and since has become the nation's second- or third-richest man.

Mehta, a former Oracle executive himself, says he doesn't want venture capital, and didn't seek out the CIA's investment. He joined Stratify in February, when it was still Purple Yogi, a frugal company that still had $20 million of the $30 million venture funding it had received over the past two years.

But like Ellison, Mehta sees a good customer in the CIA, one that can open similar doors for his company. "I've seen Larry fight that battle, and I want to fight it the same way," says Mehta, who once reported directly to Ellison.

The parallels run deeper. Mehta wants Stratify to tap into what he believes is a huge potential market for mining, and then ordering, "unstructured" data. Oracle and its early competitors discovered the database-software market – which orders "structured" information.

Of the information that a typical company carries on its Web server and computers, 85 percent is unstructured, Mehta says. That's why Mehta says he can build Stratify into a giant that rivals Oracle.

That's also the reason why the CIA is interested. In-Q-Tel's Kaufmann says Stratify is better than its competitors because it creates a hierarchy for the information it seeks, has superior classification technology, and is nimble in the way it allows users to decide what research to conduct. The company is brainy. It has about 15 employees with doctorate degrees. Twenty of its 75 employees are engineers based in India.

Stratify recently won a deal with Infosys, a management-consulting company that uses Stratify's software in the products it offers to clients. Investors say Stratify is more advanced than Autonomy, a publicly traded U.K. competitor. "It can handle millions of documents and can crawl over everything looking for stuff," says Bill Burnham, a partner at Softbank Venture Capital and an earlier investor.

He and other investors encouraged Mehta to take up the relationship with the CIA. In times like this, any funding at all is "nothing but positive," says Purvi Gandhi, a venture capitalist with H&Q Asia Pacific, who also invested in the company.

The CIA deal was in the works before the Sept. 11 attack, and it was sought out by Gilman Louie, In-Q-Tel's chief executive. Louie, otherwise known as Q (a reference to the technologist Q in the James Bond movies who shows 007 the latest gadgets), is a man who "pulses with energy," according to Mehta. That the CIA sought a deal that is relevant for the attack's aftermath is a coincidence, Mehta says.

Mehta has presided over a 21 percent reduction in workforce, preparing the company to survive through 2003 – even before the CIA's investment.

Mehta learned the hard way. His previous company, Sunnyvale's Impresse, went out of business early this year after burning through about $80 million in venture capital.

Purple Yogi was frugal, but Mehta says newly named Stratify is even cheaper now that he's arrived. Forget credit cards, free food, massages or big-budget outings. To have fun, the company created a 21-hole miniature-golf course on premises. Employees went to a baseball game on public transit.

And Mehta's cubicle is tiny. He recalls his senior vice president's digs at Oracle: a sprawling private office, a waiting room, a secretary, a training room and sauna. "My personal bathroom was as big as my cubicle," he says, pointing to his new humble digs.

He's not Ellison yet.

END

Disclaimer: this post and the subject matter and contents thereof - text, media, or otherwise - do not necessarily reflect the views of the 8kun administration.

cb0e92 No.196872

YouTube embed. Click thumbnail to play.

VIDEO: Memex #001 Demo (YouTube)

This is essentially a mechanical browser/search engine.

Vannevar Bush's ideas are what Silicon Valley turned into all the tech we see and use today.

This was THE model for browsers like Safari, Internet Explorer, and Firefox, and for search engines like Google and Bing, which now use it with sites that host information… From this came MyLifeBits, Gordon Bell's Microsoft project to record his whole life in bits so his memories could be recalled on demand, realizing Bush's Memex idea.. and that is what was turned into LifeLog and later Facebook..

Fuckerberg NEVER invented shit.. he is ONLY a figurehead with a feel-good story about college days and dating..

When you know.. you cannot look at things the same way again.

Disclaimer: this post and the subject matter and contents thereof - text, media, or otherwise - do not necessarily reflect the views of the 8kun administration.
Post last edited at

cb0e92 No.196873

YouTube embed. Click thumbnail to play.

VIDEO: 1968 “Mother of All Demos” by SRI’s Doug Engelbart and Team (YouTube)

Disclaimer: this post and the subject matter and contents thereof - text, media, or otherwise - do not necessarily reflect the views of the 8kun administration.

386ec3 No.197803

YouTube embed. Click thumbnail to play.

VIDEO: Exposing the NSA’s Mass Surveillance of Americans | Cyberwar (YouTube)

Disclaimer: this post and the subject matter and contents thereof - text, media, or otherwise - do not necessarily reflect the views of the 8kun administration.

386ec3 No.197806

File: 4ad99671fe05c2b⋯.png (1.2 MB,1168x2608,73:163,Screenshot_2024_08_05_at_0….png)

File: 6f86dd8fa190145⋯.png (140.47 KB,1300x1740,65:87,Screenshot_2024_08_05_at_0….png)

File: 31c91b88a38ca13⋯.png (3.38 MB,1358x2530,679:1265,Screenshot_2024_08_05_at_0….png)

File: 88920e3011cfe34⋯.png (1.98 MB,1312x1904,82:119,Screenshot_2024_08_05_at_0….png)

https://www.dailymail.co.uk/news/article-3747202/Paul-Ceglia-supposed-Facebook-founder-disappeared-2015-says-s-running-CIA-want-kill-knowledge-involved-social-media-site.html

Fugitive 'Facebook founder' says he's alive and well but 'running for his life' from CIA because of its secret involvement in the social media site(DailyMail.com)

The self-styled co-founder of Facebook, who disappeared while on house arrest in March 2015, has said he and his family are alive and well, but still fleeing a CIA plot to kill him.

Paul Ceglia, 43, claimed in 2010 that he owned 84 per cent of Facebook per an alleged 2003 contract with Mark Zuckerberg. In 2012 he was charged with altering documents to bolster his claim.

Now he and his family - wife Iasia, two teen sons and dog Buddy - are on the run after they fled last year. And according to Bloomberg, Ceglia still fears for his life.

Fugitive: Paul Ceglia (pictured in 2012) claimed he paid for half of Facebook's development in 2003 and lent Mark Zuckerberg code. He sued Zuckerberg and Facebook in 2010 and is now fighting extradition from Ecuador

In emails sent to the site from August 3-8, Ceglia said he and his family had fled abroad and were now living under the radar, lest the CIA kill them.

Police entered his home in Wellsville, New York, on March 5 to find his ankle bracelet connected to a contraption that was designed to make it look like he was walking around his home.

Ceglia claims that he had a 'very credible' threat that he would be arrested on new charges, jailed and killed - and had to flee before that happened.

'I felt I had no one in government I could trust,' Ceglia wrote in one of four e-mails.

'An opportunity presented itself, so I MacGyver’d some things together and started running for my life.'

Denial: Zuckerberg denied claims. In 2012, Ceglia was accused of doctoring evidence and placed on house arrest. He and his family vanished in 2015, leaving ankle bracelet behind

He says the reason for the supposed plot against his life is that his fraud trial might reveal involvement by the CIA's venture capital arm, In-Q-Tel, in Facebook.

Exactly what that alleged involvement was is not clear.

It's not known where Ceglia is, or even if he is truly abroad and applying for asylum, as he says he is.

He will only say that he is 'living on the air in Cincinnati,' a line from the theme tune of TV show 'WKRP in Cincinnati.'

On the run: Ceglia, his wife Iasia (left), his two sons and dog Buddy are all now abroad, he says, and seeking asylum. He says the CIA wants to kill him because of what he knows

But Robert Ross Fogg, one of Ceglia’s lawyers in the criminal case, said he is 'relieved' to discover that Ceglia is safe and well, and disappeared of his own volition.

He also asked Ceglia to return, pointing out that a judge in December said there was 'probable cause' for Ceglia's contract claim. 'To win this case, I need him home,' Fogg said.

Ceglia hired Zuckerberg to write code for his now-defunct website Streetfax.com in 2003.

He says he gave Zuckerberg money and access to the Streetfax search engine in an early build of what was then called 'The Face Book'.

Zuckerberg says that his contract was only for Streetfax.com and that he didn't think of Facebook until much later.

Family: Ceglia (pictured with his sons, who are now teens) says the CIA's venture capital arm, In-Q-Tel, had a hand in Facebook, but it's not clear exactly what that means

Disclaimer: this post and the subject matter and contents thereof - text, media, or otherwise - do not necessarily reflect the views of the 8kun administration.

bfe022 No.197846

File: de2bab52fb2359e⋯.png (706.69 KB,1534x1974,767:987,Screenshot_2024_08_05_at_1….png)

File: a7ecb4f9510f11f⋯.png (739.89 KB,1528x2000,191:250,Screenshot_2024_08_05_at_1….png)

File: a698fe92612a662⋯.png (786.9 KB,1538x1994,769:997,Screenshot_2024_08_05_at_1….png)

File: b9fdea7be3aae0e⋯.png (845.22 KB,1534x1994,767:997,Screenshot_2024_08_05_at_1….png)

File: 2cb24073fad716b⋯.png (852.1 KB,1538x1988,769:994,Screenshot_2024_08_05_at_1….png)

Disclaimer: this post and the subject matter and contents thereof - text, media, or otherwise - do not necessarily reflect the views of the 8kun administration.

2758e5 No.198103

File: 10fe7f091aa6914⋯.png (1.06 MB,1230x2098,615:1049,Screenshot_2024_08_07_at_1….png)

How Google's former CEO Eric Schmidt helped write A.I. laws in Washington without publicly disclosing investments in A.I. startups (CNBC.com)

PUBLISHED MON, OCT 24 2022 10:46 AM EDT

UPDATED MON, OCT 24 2022 1:14 PM EDT

Eamon Javers

@EAMONJAVERS

KEY POINTS

Five months after Schmidt was appointed to the National Security Commission on Artificial Intelligence, he made a little-noticed private investment in an initial seed round of financing for a start-up company called Beacon.

It was the first of a handful of direct investments he would make in AI start-up companies during his tenure as chairman of the AI commission.

While there is no indication that Schmidt broke any ethics rules or did anything unlawful, government ethics advisors say his investments presented a huge conflict of interest.

About four years ago, former Google CEO Eric Schmidt was appointed to the National Security Commission on Artificial Intelligence by the chairman of the House Armed Services Committee.

It was a powerful perch. Congress tasked the new group with a broad mandate: to advise the U.S. government on how to advance the development of artificial intelligence, machine learning and other technologies to enhance the national security of the United States.

The mandate was simple: Congress directed the new body to advise on how to enhance American competitiveness on AI against its adversaries, build the AI workforce of the future, and develop data and ethical procedures.

In short, the commission, which Schmidt soon took charge of as chairman, was tasked with coming up with recommendations for almost every aspect of a vital and emerging industry. The panel did far more under his leadership. It wrote proposed legislation that later became law and steered billions of dollars of taxpayer funds to the industry he helped build — and that he was actively investing in while running the group.

If you're going to be leading a commission that is steering the direction of government AI and making recommendations for how we should promote this sector and scientific exploration in this area, you really shouldn't also be dipping your hand in the pot and helping yourself to AI investments.

Walter Shaub

SENIOR ETHICS FELLOW, PROJECT ON GOVERNMENT OVERSIGHT

His credentials, however, were impeccable given his deep experience in Silicon Valley, his experience advising the Defense Department, and a vast personal fortune estimated at about $20 billion.

Five months after his appointment, Schmidt made a little-noticed private investment in an initial seed round of financing for a startup company called Beacon, which uses AI in the company's supply chain products for shippers who manage freight logistics, according to CNBC's review of investment information in database Crunchbase.

There is no indication that Schmidt broke any ethics rules or did anything unlawful while chairing the commission. The commission was, by design, an outside advisory group of industry participants, and its other members included well-known tech executives including Oracle CEO Safra Catz, Amazon Web Services CEO Andy Jassy and Microsoft Chief Scientific Officer Dr. Eric Horvitz, among others.

'Conflict of interest'

Schmidt's investment was just the first of a handful of direct investments he would make in AI startup companies during his tenure as chairman of the AI commission.

"It's absolutely a conflict of interest," said Walter Shaub, a senior ethics fellow at the Project on Government Oversight, and a former director of the U.S. Office of Government Ethics.

"That's technically legal for a variety of reasons, but it's not the right thing to do," Shaub said.

Venture capital firms financed, in part, by Schmidt and his private family foundation also made dozens of additional investments in AI companies during Schmidt's tenure, giving Schmidt an economic stake in the industry even as he developed new regulations and encouraged taxpayer financing for it. Altogether, Schmidt and entities connected to him made more than 50 investments in AI companies while he was chairman of the federal commission on AI. Information on his investments isn't publicly available.

All that activity meant that, at the same time Schmidt was wielding enormous influence over the future of federal AI policy, he was also potentially positioning himself to profit personally from the most promising young AI companies.

Institutional issues

Schmidt's conflict of interest is not unusual. The investments are an example of a broader issue identified by ethics reformers in Washington, D.C.: outside advisory committees that are given significant sway over industries without enough public disclosure of potential conflicts of interest. "The ethics enforcement process in the executive branch is broken, it does not work," said Craig Holman, a lobbyist on ethics, lobbying and campaign finance for Public Citizen, the consumer advocacy organization. "And so the process itself is partly to blame here."

The federal government counts a total of 57 active federal advisory commissions, with members offering input on everything from nuclear reactor safeguards to environmental rules and global commodities markets.

For years, reformers have tried to impose tougher ethics rules on Washington's sprawling network of outside advisory commissions. In 2010, then-President Barack Obama used an executive order to block federally registered lobbyists from serving on federal boards and commissions. But a group of Washington lobbyists fought back with a lawsuit arguing the new rule was unfair to them, and the ban was scaled back.

'Fifth arm of government'

The nonprofit Project on Government Oversight has called federal advisory committees the "fifth arm of government" and has pushed for changes including additional requirements for posting conflict-of-interest waivers and recusal statements, as well as giving the public more input in nominating committee members. Also in 2010, the House passed a bill that would prohibit the appointment of commission members with conflicts of interest, but the bill died in the Senate.

"It's always been this way," Holman said. "When Congress created the Office of Government Ethics to oversee the executive branch, you know, they didn't really want a strong ethics cop, they just wanted an advisory commission." Holman said each federal agency selects its own ethics officer, creating a vast system of more than 4,000 officials. But those officers aren't under the control of the Office of Government Ethics – there's "no one person in charge," he said.

Eric Schmidt during a news conference at the main office of Google Korea in Seoul on November 8, 2011.

Jung Yeon-je | Afp | Getty Images

People close to Schmidt say his investments were disclosed in a private filing to the U.S. government at the time. But the public and the news media had no access to that document, which was considered confidential. The investments were not revealed to the public by Schmidt or the commission. His biography on the commission's website detailed his experiences at Google, his efforts on climate change and his philanthropy, among other details. But it did not mention his active investments in artificial intelligence.

A spokesperson for Schmidt told CNBC that he followed all rules and procedures in his tenure on the commission: "Eric has given full compliance on everything," the spokesperson said.

But ethics experts say Schmidt simply should not have made private investments while leading a public policy effort on artificial intelligence.

"If you're going to be leading a commission that is steering the direction of government AI and making recommendations for how we should promote this sector and scientific exploration in this area, you really shouldn't also be dipping your hand in the pot and helping yourself to AI investments," said Shaub of the Project on Government Oversight.

Shaub said there were several ways Schmidt could have minimized this conflict of interest: He could have made the public aware of his AI investments, he could have released his entire financial disclosure report, or he could have made the decision not to invest in AI while he was chair of the AI commission.

Public interest

"It's extremely important to have experts in the government," Shaub said. "But it's, I think, even more important to make sure that you have experts who are putting the public's interests first."

The AI commission, which Schmidt chaired until it expired in the fall of 2021, was far from a stereotypical Washington blue-ribbon commission issuing white papers that few people actually read.

Instead, the commission delivered reports which contained actual legislative language for Congress to pass into law to finance and develop the artificial intelligence industry. And much of that recommended language was written into vast defense authorization bills. Sections of legislative language passed, word for word, from the commission into federal law.

The commission's efforts also sent millions of taxpayer dollars to priorities it identified. In just one case, the fiscal 2023 National Defense Authorization Act included $75 million "for implementing the National Security Commission on Artificial Intelligence recommendations."

At a commission event in September 2021, Schmidt touted the success of his team's approach. He said the commission staff "had this interesting idea that not only should we write down what we thought, which we did, but we would have a hundred pages of legislation that they could just pass." That, Schmidt said, was "an idea that had never occurred to me before but is actually working."

$200 billion modification

Schmidt said one piece of legislation moving on Capitol Hill was "modified by $200 billion." That, he said, was "essentially enabled by the work of the staff" of the commission.

At that same event, Schmidt suggested that his staff had wielded similar influence over the classified annexes to national security-related bills emanating from Congress. Those documents provide financing and direction to America's most sensitive intelligence agencies. To protect national security, the details of such annexes are not available to the American public.

"We don't talk much about our secret work," Schmidt said at the event. "But there's an analogous team that worked on the secret stuff that went through the secret process that has had similar impact."

Asked whether classified language in the annex proposed by the commission was adopted in legislation that passed into law, a person close to Schmidt responded, "due to the classified nature of the NSCAI annex, it is not possible to answer this question publicly. NSCAI provided its analysis and recommendations to Congress, to which members of Congress and their staff reviewed and determined what, if anything, could/should be included in a particular piece of legislation."

Beyond influencing classified language on Capitol Hill, Schmidt suggested that the key to success in Washington was being able to push the White House to take certain actions. "We said we need leadership from the White House," Schmidt said at the 2021 event. "If I've learned anything from my years of dealing with the government, is the government is not run like a tech company. It's run top down. So, whether you like it or not, you have to start at the top, you have to get the right words, either they say it, or you write it for them, and you make it happen. Right? And that's how it really, really works."

Industry friendly

The commission produced a final report with top-line conclusions and recommendations that were friendly to the industry, calling for vastly increased federal spending on AI research and a close working relationship between government and industry.

The final report waived away concerns about too much government intervention in the private sector or too much federal spending.

"This is not a time for abstract criticism of industrial policy or fears of deficit spending to stand in the way of progress," the commission concluded in its 2021 report. "In 1956, President Dwight Eisenhower, a fiscally conservative Republican, worked with a Democratic Congress to commit $10 billion to build the Interstate Highway System. That is $96 billion in today's world."

The commission didn't go quite that big, though. In the end, it recommended $40 billion in federal spending on AI, and suggested it should be done hand in hand with tech companies.

"The federal government must partner with U.S. companies to preserve American leadership and to support development of diverse AI applications that advance the national interest in the broadest sense," the commission wrote. "If anything, this report underplays the investments America will need to make."

The urgency driving all of this, the commission said, is Chinese development of AI technology that rivals the software coming out of American labs: "China's plans, resources, and progress should concern all Americans."

China, the commission said, is an AI peer in many areas and a leader in others. "We take seriously China's ambition to surpass the United States as the world's AI leader within a decade," it wrote.

But Schmidt's critics see another ambition behind the commission's findings: steering more federal dollars toward research that can benefit the AI industry.

"If you put a tech billionaire in charge, any framing that you present them, the solution will be, 'give my investments more money,' and that's indeed what we see," said Jack Poulson, executive director of the nonprofit group Tech Inquiry. Poulson formerly worked as a research scientist at Google, but he resigned in 2018 in protest of what he said was Google bending to the censorship demands of the Chinese government.

Too much power?

To Poulson, Schmidt was simply given too much power over federal AI policy. "I think he had too much influence," Poulson said. "If we believe in a democracy, we should not have a couple of tech billionaires, or, in his case, one tech billionaire, that is essentially determining US government allocation of hundreds of billions of dollars."

The federal commission wound down its work on Oct. 1, 2021.

Four days later, on Oct. 5, Schmidt announced a new initiative called the Special Competitive Studies Project. The new entity would continue the work of the congressionally created federal commission, with many of the same goals and much of the same staff. But this would be an independent nonprofit and operate under the financing and control of Schmidt himself, not Congress or the taxpayer. The new project, he said, will "make recommendations to strengthen America's long-term global competitiveness for a future where artificial intelligence and other emerging technologies reshape our national security, economy, and society."

The CEO of Schmidt's latest initiative would be the same person who had served as the executive director of the National Security Commission on Artificial Intelligence. More than a dozen staffers from the federal commission followed Schmidt to the new private sector project. Other people from the federal commission came over to Schmidt's private effort, too: Vice Chair Robert Work, a former deputy secretary of defense, would serve on Schmidt's board of advisors. Mac Thornberry, the congressman who appointed Schmidt to the federal commission in the first place, was now out of office and would also join Schmidt's board of advisors.

They set up new office space just down the road from the federal commission's headquarters in Crystal City, Virginia, and began to build on their work at the federal commission.

The new Special Competitive Studies Project issued its first report on Sept. 12. The authors wrote, "Our new project is privately funded, but it remains publicly minded and staunchly nonpartisan in believing technology, rivalry, competition and organization remain enduring themes for national focus."

The report calls for the creation of a new government entity that would be responsible for organizing the government-private sector nexus. That new organization, the report says, could be based on the roles played by the National Economic Council or the National Security Council inside the White House.

It is not clear if the project will disclose Schmidt's personal holdings in AI companies. So far, it has not.

Asked if Schmidt's AI investments will be disclosed by the project in the future, a person close to Schmidt said, "SCSP is organized as a charitable entity, and has no relationship to any personal investment activities of Dr. Schmidt." The person also said the project is a not-for-profit research entity that will provide public reports and recommendations. "It openly discloses that it is solely funded by the Eric and Wendy Schmidt Fund for Strategic Innovation."

In a way, Schmidt's approach to Washington is the culmination of a decade or more as a power player in Washington. Early on, he professed shock at the degree to which industry influenced policy and legislation in Washington. But since then, his work on AI suggests he has embraced that fact of life in the capital.

Obama donor

Schmidt first came to prominence on the Potomac as an early advisor and donor to the first presidential campaign of Barack Obama. Following the 2008 election, he served on Obama's presidential transition and as a presidential advisor on science and technology issues. Schmidt had risen to the heights of power and wealth in Silicon Valley, but what he saw in the nation's capital surprised him.

In a 2010 conversation with The Atlantic's then Editor-in Chief James Bennet, Schmidt told a conference audience what he had learned in his first years in the nation's capital. "The average American doesn't realize how much the laws are written by lobbyists," Schmidt said. "It's shocking now, having spent a fair amount of time inside the system, how the system actually works. It is obvious that if the system is organized around incumbencies writing the laws, the incumbencies will benefit from the laws that are written."

Bennet, pushing back, suggested that Google was already one of the greatest incumbent corporations in America.

"Well, perhaps," Schmidt replied in 2010. "But we don't write the laws."

— CNBC's Paige Tortorelli, Bria Cousins, Scott Zamost and Margaret Fleming contributed to this report.

Disclaimer: this post and the subject matter and contents thereof - text, media, or otherwise - do not necessarily reflect the views of the 8kun administration.

a38092 No.208355

YouTube embed. Click thumbnail to play.

VIDEO: What the CIA Doesn’t Want You to Know (It Happens To You Everyday) (YouTube)

Disclaimer: this post and the subject matter and contents thereof - text, media, or otherwise - do not necessarily reflect the views of the 8kun administration.

cb0e92 No.223403

File: 968d05daff5fc89⋯.png (739.17 KB,1978x2186,989:1093,Screenshot_2025_03_08_at_0….png)

File: 6e08c1af06b97ae⋯.png (730.1 KB,1958x2128,979:1064,Screenshot_2025_03_08_at_0….png)

https://archive.epic.org/privacy/profiling/tia/doc_analysis.html

EPIC Analysis of Total Information Awareness Contractor Documents, February 2003 (archive.epic.org)

THE CASE

This is the first release of documents obtained by EPIC about the Total Information Awareness program following a Freedom of Information Act lawsuit against the Defense Department. EPIC v. Department of Defense, No. 02-1233 (D.C. Dist. Ct. 2002).

The Department of Defense attempted to block the public release of these documents by imposing unprecedented fees on EPIC, a public interest research organization. EPIC challenged the fee determination, and a federal district court ruled for EPIC and against the Department of Defense. The court held that EPIC is entitled to "preferred fee status" under the FOIA and ordered the Pentagon to "expeditiously" process EPIC's almost year-old request for information concerning Admiral John Poindexter and the Information Awareness Office.

After the decision EPIC held discussions with the Defense Advanced Research Projects Agency (DARPA) to streamline the document processing. EPIC anticipates receiving more documents covering various aspects of DARPA's data mining activities and the Total Information Awareness program over the next few months. These documents will be made available by EPIC as they are received.

THE DOCUMENTS

This first batch of documents consists of letters from Admiral Poindexter to various companies that submitted projects for grants under DARPA's solicitation notice, BAA-02-08, which was published on March 21, 2002. BAA-02-08 is a solicitation notice covering the Defense Department's Total Information Awareness program and states that:

DARPA is soliciting innovative research proposals in the area of information technologies that will aid in the detection, classification, identification, and tracking of potential foreign terrorists, wherever they may be, to understand their intentions, and to develop options to prevent their terrorist acts. Proposed research should investigate innovative approaches that enable revolutionary advances in science, technology or systems. Specifically excluded is research that primarily results in evolutionary improvements to the existing state of practice.

Companies were given the opportunity to submit their proposals for evaluation. There are several rounds of evaluations and DARPA stops accepting proposals on March 21, 2003. The letters are DARPA's responses, stating either approval or rejection of the research project. The letters provide information on the contractors, their project title, and the government contact for the project, if it met with DARPA approval.

The documents do not show how much funding each proposal received. The "Broad Agency Announcement" (or "BAA") notes that most proposals should anticipate receiving between $200,000 and $1 million per year. Of the 180 letters, 26 are approvals. The most recent letter released is dated December 4, 2002. The contractors who sought funding range from large corporations, including Lockheed Martin and Raytheon, to small technology start-ups and large research universities.

The proposal control number noted in each approval letter's reference line appears to indicate the technical topic area under the BAA. For example, Hicks & Associates were granted funding for "Information Awareness Prototype System Development" and their proposal control number is 3.09; Alphatech, Inc. was granted funding for "Extensible Probabilistic Repository Technology" and their number is 1.01. (See the table on EPIC's page for more information on other approved contractors.) The three topic areas are:

1. Repository technologies

This is described by the BAA as the "Development of revolutionary technology for ultra-large all-source information repositories and associated privacy protection technologies." The notice describes the storage technology issues in more detail:

The National Security Community has a need for very large scale databases covering comprehensive information about all potential terrorist threats; those who are planning, supporting or preparing to carry out such events; potential plans; and potential targets. In the context of this BAA, the term "database" is intended to convey a new kind of extremely large, omni-media, virtually-centralized, and semantically-rich information repository that is not constrained by today's limited commercial database products – we use "database" for lack of a more descriptive term. DARPA seeks innovative technologies needed to architect, populate, and exploit such a database for combating terrorism.

The technologies, as conceived by the BAA, also include:

Technologies for controlling automated search and exploitation algorithms and for purging data structures appropriately. Business rules are required to enforce security policy and views appropriate for the viewer's role. The potential sources of information about possible terrorist activities will include extensive existing databases. Innovative technologies are sought for treating these databases as a virtual, centralized, grand database.

2. Collaboration, Automation and Cognitive Aids technologies

This is described as "Development of collaboration, automation, and cognitive aids technologies that allow humans and machines to think together about complicated and complex problems more efficiently and effectively." DARPA is seeking technologies that would:

[A]id the human intellect as teams collaborate to build models of existing threats, generate a rich set of threat scenarios, perform formal risk analysis, and develop options to counter them. These tools should provide structure to the collaborative cognitive work, and externalize it so that it can be examined, critiqued, used to generate narrative and multi-media explanations, and archived for re-use.

3. Prototype System technologies

This is the most significant focus of the Total Information Awareness program and is described in the BAA as:

Development and implementation of an end-to-end, closed-loop prototype system to aid in countering terrorism through prevention by integrating technology and components from existing DARPA programs such as: Genoa, EELD (Evidence Extraction and Link Discovery), WAE (Wargaming the Asymmetric Environment), TIDES (Translingual Information Detection, Extraction and Summarization), HID (Human Identification at Distance), Bio-Surveillance; as well as programs resulting from the first two [topic] areas of this BAA and other programs.

According to BAA-02-08 the main focus of the TIA program is to build "usable tools, rather than demonstrations." The notice states that, "The idea is to enable our partners in the intelligence community to evaluate new technology and pick it up for experimental use and transition, as appropriate." The third topic area, in other words, is where the experimental "leave behind prototypes" the program envisions would be built.
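The first topic area's "virtual, centralized, grand database" is easier to grasp with a minimal sketch of the facade pattern it implies: many existing stores queried through one interface, with no physical merging. The sources and records below are invented for illustration.

class VirtualRepository:
    """Fan a query out to many existing stores; present one result stream."""

    def __init__(self, sources):
        self.sources = sources  # each source: name -> list of record dicts

    def search(self, **criteria):
        for name, records in self.sources.items():
            for rec in records:
                if all(rec.get(k) == v for k, v in criteria.items()):
                    yield {"source": name, **rec}

repo = VirtualRepository({
    "travel":  [{"person": "J. Doe", "flight": "XX123"}],
    "banking": [{"person": "J. Doe", "txn": 9999}],
})
for hit in repo.search(person="J. Doe"):
    print(hit)  # one virtual view over two separate databases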

OTHER IMPLICATIONS

The government contacts indicate potential users or developers of the TIA technology. The contacts are from three branches of the Defense Department: the Air Force Research Laboratory, the Navy's Space and Naval Warfare Systems (SPAWAR), and the DARPA Information Awareness Office itself. In addition, funding for three approved projects comes from the Information Exploitation Office of DARPA. The Air Force Research Laboratory's "Information Directorate," based in Rome, NY, was developing elements of TIA technology under Douglas Dyer, who has since moved to DARPA and is the author of BAA-02-08. The Navy's SPAWAR program also appears interested in developing large-scale repository and data mining capabilities. It is not clear how these technologies would be useful to the Air Force and Navy in the respective "battlespaces" they operate in, or why they are funding the development of domestic surveillance infrastructure.

For more information, see EPIC's Total Information Awareness page.

EPIC Privacy Page | EPIC Home Page

Disclaimer: this post and the subject matter and contents thereof - text, media, or otherwise - do not necessarily reflect the views of the 8kun administration.
Post last edited at

cb0e92 No.223406

YouTube embed. Click thumbnail to play.

VIDEO: Total Information Awareness Program (YouTube)

A PRE-Facebook program that ended up being what Facebook now does daily across the web, whether or not you have an account. 9/11 was the justification used to start it.

Disclaimer: this post and the subject matter and contents thereof - text, media, or otherwise - do not necessarily reflect the views of the 8kun administration.

cb0e92 No.223407

File: 030e0ab941496f6⋯.png (301.52 KB,1674x830,837:415,Screenshot_2025_03_08_at_0….png)

File: ab9d423b54c0df9⋯.png (1.97 MB,1846x2654,923:1327,Screenshot_2025_03_08_at_0….png)

File: 4df973061edc071⋯.png (395.17 KB,1224x2614,612:1307,Screenshot_2025_03_08_at_0….png)

File: 8790d848eff0f9d⋯.png (438.58 KB,1198x2850,599:1425,Screenshot_2025_03_08_at_0….png)

https://www.technologyreview.com/2019/07/30/133986/facebook-is-funding-brain-experiments-to-create-a-device-that-reads-your-mind/

Facebook is funding brain experiments to create a device that reads your mind

Big tech firms are trying to read people’s thoughts, and no one’s ready for the consequences.

By Antonio Regalado

July 30, 2019

Facebook prototype thought helmet

FACEBOOK

In 2017, Facebook announced that it wanted to create a headband that would let people type at a speed of 100 words per minute, just by thinking.

Now, a little over two years later, the social-media giant is revealing that it has been financing extensive university research on human volunteers.

Today, some of that research was described in a scientific paper from the University of California, San Francisco, where researchers have been developing “speech decoders” able to determine what people are trying to say by analyzing their brain signals.

The research is important because it could help show whether a wearable brain-control device is feasible and because it is an early example of a giant tech company being involved in getting hold of data directly from people’s minds.

To some neuro-ethicists, that means we are going to need some rules, and fast, about how brain data is collected, stored, and used.

In the report published today in Nature Communications, UCSF researchers led by neuroscientist Edward Chang used sheets of electrodes, called ECoG arrays, that were placed directly on the brains of volunteers.

The scientists were able to listen in in real time as three subjects heard questions read from a list and spoke simple answers. One question was “From 0 to 10, how much pain are you in?” The system was able to detect both the question and the response of 0 to 10 far better than chance.

Another question asked was which musical instrument they preferred, and the volunteers were able to answer “piano” and “violin.” The volunteers were undergoing brain surgery for epilepsy.
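As a rough illustration of what a "speech decoder" is and what "better than chance" means here, consider this toy Python sketch: a classifier mapping neural-signal features to one of eleven possible answers. The data is synthetic and the model trivial; this is not the UCSF pipeline, only the shape of the problem.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_features, n_answers = 300, 64, 11  # e.g. pain ratings 0-10

# Synthetic "ECoG features": each answer class has its own mean pattern
class_means = rng.normal(0, 1, (n_answers, n_features))
y = rng.integers(0, n_answers, n_trials)
X = class_means[y] + rng.normal(0, 2.0, (n_trials, n_features))  # noisy trials

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print(f"decoder accuracy: {clf.score(X_te, y_te):.2f}")
print(f"chance level:     {1 / n_answers:.2f}")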

Facebook says the research project is ongoing, and that it is now funding UCSF in efforts to restore the ability to communicate to a disabled person with a speech impairment.

Eventually, Facebook wants to create a wearable headset that lets users control music or interact in virtual reality using their thoughts.

To that end, Facebook has also been funding work on systems that listen in on the brain from outside the skull, using fiber optics or lasers to measure changes in blood flow, similar to an MRI machine.

Such blood-flow patterns represent only a small part of what’s going on in the brain, but they could be enough to distinguish between a limited set of commands.

“Being able to recognize even a handful of imagined commands, like ‘home,’ ‘select,’ and ‘delete,’ would provide entirely new ways of interacting with today's VR systems—and tomorrow's AR glasses,” Facebook wrote in a blog post.

Facebook has plans to demonstrate a prototype portable system by the end of the year, although the company didn’t say what it would be capable of, or how it would measure the brain.

Privacy question

Research on brain-computer interfaces has been speeding up as rich tech companies jump in. On July 16, Neuralink, a brain interface company formed by SpaceX founder Elon Musk, said it hoped to implant electrodes into the brains of paralyzed volunteers within two years.

However, the public has reason to doubt whether tech companies can be trusted with a window into their brains. Last month, for example, Facebook was hit with a record $5 billion fine for deceiving customers about how their personal information gets used.

“To me the brain is the one safe place for freedom of thought, of fantasies, and for dissent,” says Nita Farahany, a professor at Duke University who specializes in neuro-ethics. “We’re getting close to crossing the final frontier of privacy in the absence of any protections whatsoever.”

Facebook emphasizes that all the brain data collected at UCSF will stay at the university, but Facebook employees are able to go there to study it.

It’s not known how much money Facebook is providing the university nor how much volunteers know about the company’s role. A university spokesman, Nicholas Weiler, declined to provide a copy of the research contract or the consent forms signed by patients. He said the consent forms list Facebook among several potential sponsors of the research.

While a brain reader could be a convenient way to control devices, it would also mean Facebook would be hearing brain signals that could, in theory, give it much more information, like how people are reacting to posts and updates.

“Brain data is information-rich and privacy sensitive, it’s a reasonable concern,” says Marcello Ienca, a brain-interface researcher at ETH in Zurich. “Privacy policies implemented at Facebook are clearly insufficient.”

Facebook says it will do better with brain data. “We take privacy very seriously,” says Mark Chevillet, who leads the brain reading project at Facebook.

Disclaimer: this post and the subject matter and contents thereof - text, media, or otherwise - do not necessarily reflect the views of the 8kun administration.

cb0e92 No.223408

File: dcfa55c0fdd676c⋯.png (3.42 MB,1448x1728,181:216,Screenshot_2025_03_08_at_0….png)

https://www.nytimes.com/2021/11/12/opinion/facebook-privacy.html

You Are the Object of a Secret Extraction Operation

Dr. Zuboff is a professor emeritus at Harvard Business School and the author of “The Age of Surveillance Capitalism.”

Facebook is not just any corporation. It reached trillion-dollar status in a single decade by applying the logic of what I call surveillance capitalism — an economic system built on the secret extraction and manipulation of human data — to its vision of connecting the entire world. Facebook and other leading surveillance capitalist corporations now control information flows and communication infrastructures across the world.

These infrastructures are critical to the possibility of a democratic society, yet our democracies have allowed these companies to own, operate and mediate our information spaces unconstrained by public law. The result has been a hidden revolution in how information is produced, circulated and acted upon. A parade of revelations since 2016, amplified by the whistle-blower Frances Haugen’s documentation and personal testimony, bears witness to the consequences of this revolution.

The world’s liberal democracies now confront a tragedy of the “un-commons.” Information spaces that people assume to be public are strictly ruled by private commercial interests for maximum profit. The internet as a self-regulating market has been revealed as a failed experiment. Surveillance capitalism leaves a trail of social wreckage in its wake: the wholesale destruction of privacy, the intensification of social inequality, the poisoning of social discourse with defactualized information, the demolition of social norms and the weakening of democratic institutions.

These social harms are not random. They are tightly coupled effects of evolving economic operations. Each harm paves the way for the next and is dependent on what went before.

There is no way to escape the machine systems that surveil us, whether we are shopping, driving or walking in the park. All roads to economic and social participation now lead through surveillance capitalism’s profit-maximizing institutional terrain, a condition that has intensified during nearly two years of global plague.

Will Facebook’s digital violence finally trigger our commitment to take back the “un-commons”? Will we confront the fundamental but long ignored questions of an information civilization: How should we organize and govern the information and communication spaces of the digital century in ways that sustain and advance democratic values and principles?

Search and Seizure

Facebook as we now know it was fashioned from Google’s rib. Mark Zuckerberg’s start-up did not invent surveillance capitalism. Google did that. In 2000, when only 25 percent of the world’s information was stored digitally, Google was a tiny start-up with a great search product but little revenue.

By 2001, in the teeth of the dot-com bust, Google’s leaders found their breakthrough in a series of inventions that would transform advertising. Their team learned how to combine massive data flows of personal information with advanced computational analyses to predict where an ad should be placed for maximum “click through.” Predictions were computed initially by analyzing data trails that users unknowingly left behind in the company’s servers as they searched and browsed Google’s pages. Google’s scientists learned how to extract predictive metadata from this “data exhaust” and use it to analyze likely patterns of future behavior.
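The mechanic Zuboff describes reduces to a familiar pattern: train a model on behavioral traces to predict the probability that a given user clicks a given ad, then rank placements by that score. Here is a hedged toy sketch with synthetic features and labels; it shows the pattern, not Google's actual system.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
# Hypothetical "data exhaust" features per ad impression
X = np.column_stack([
    rng.poisson(2, n),        # related searches in the past week
    rng.poisson(5, n),        # related pages browsed
    rng.integers(0, 24, n),   # hour of day
])
# Synthetic ground truth: clicks grow with topical activity
p = 1 / (1 + np.exp(-(0.4 * X[:, 0] + 0.15 * X[:, 1] - 2.5)))
clicked = rng.random(n) < p

model = LogisticRegression(max_iter=1000).fit(X, clicked)

# Rank two candidate placements for one user by predicted click-through
candidates = np.array([[4, 9, 20], [0, 1, 9]])
print(model.predict_proba(candidates)[:, 1])  # higher score wins the slot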

Prediction was the first imperative that determined the second imperative: extraction. Lucrative predictions required flows of human data at unimaginable scale. Users did not suspect that their data was secretly hunted and captured from every corner of the internet and, later, from apps, smartphones, devices, cameras and sensors. User ignorance was understood as crucial to success. Each new product was a means to more “engagement,” a euphemism used to conceal illicit extraction operations.

When asked “What is Google?” the co-founder Larry Page laid it out in 2001, according to a detailed account by Douglas Edwards, Google’s first brand manager, in his book “I’m Feeling Lucky”: “Storage is cheap. Cameras are cheap. People will generate enormous amounts of data,” Mr. Page said. “Everything you’ve ever heard or seen or experienced will become searchable. Your whole life will be searchable.”

Instead of selling search to users, Google survived by turning its search engine into a sophisticated surveillance medium for seizing human data. Company executives worked to keep these economic operations secret, hidden from users, lawmakers, and competitors. Mr. Page opposed anything that might “stir the privacy pot and endanger our ability to gather data,” Mr. Edwards wrote.

Massive-scale extraction operations were the keystone to the new economic edifice and superseded other considerations, beginning with the quality of information, because in the logic of surveillance capitalism, information integrity is not correlated with revenue.

This is the economic context in which disinformation wins. As recently as 2017, Eric Schmidt, the executive chairman of Google’s parent company, Alphabet, acknowledged the role of Google’s algorithmic ranking operations in spreading corrupt information. “There is a line that we can’t really get across,” he said. “It is very difficult for us to understand truth.” A company with a mission to organize and make accessible all the world’s information using the most sophisticated machine systems cannot discern corrupt information.

Facebook, the First Follower

Mr. Zuckerberg began his entrepreneurial career in 2003 while a student at Harvard. His website, Facemash, invited visitors to rate other students’ attractiveness. It quickly drew outrage from his peers and was shuttered. Then came TheFacebook in 2004 and Facebook in 2005, when Zuckerberg acquired his first professional investors.

Facebook’s user numbers quickly grew; its revenues did not. Like Google a few years earlier, Mr. Zuckerberg could not turn popularity into profit. Instead, he careened from blunder to blunder. His crude violations of users’ privacy expectations provoked intense public backlash, petitions and class-action suits. Mr. Zuckerberg seemed to understand that the answer to his problems involved human data extraction without consent for the sake of advertisers’ advantage, but the complexities of the new logic eluded him.

He turned to Google for answers.

In March 2008, Mr. Zuckerberg hired Google’s head of global online advertising, Sheryl Sandberg, as his second in command. Ms. Sandberg had joined Google in 2001 and was a key player in the surveillance capitalism revolution. She led the build-out of Google’s advertising engine, AdWords, and its AdSense program, which together accounted for most of the company’s $16.6 billion in revenue in 2007.

A Google multimillionaire by the time she met Mr. Zuckerberg, Ms. Sandberg had a canny appreciation of Facebook’s immense opportunities for extraction of rich predictive data. “We have better information than anyone else. We know gender, age, location, and it’s real data as opposed to the stuff other people infer,” Ms. Sandberg explained, according to David Kirkpatrick in “The Facebook Effect.”

The company had “better data” and “real data” because it had a front-row seat to what Mr. Page had called “your whole life.”

Facebook paved the way for surveillance economics with new privacy policies in late 2009. The Electronic Frontier Foundation warned that new “Everyone” settings eliminated options to restrict the visibility of personal data, instead treating it as publicly available information.

TechCrunch summarized the corporation’s strategy: “Facebook is forcing users to choose their new privacy options to promote the ‘Everyone’ update, and to clear itself of any potential wrongdoing going forward. If there is significant backlash against the social network, it can claim that users willingly made the choice to share their information with everyone.”

Weeks later, Mr. Zuckerberg defended these moves to a TechCrunch interviewer. “A lot of companies would be trapped by the conventions and their legacies,” he boasted. “We decided that these would be the social norms now, and we just went for it.”

Mr. Zuckerberg “just went for it” because there were no laws to stop him from joining Google in the wholesale destruction of privacy. If lawmakers wanted to sanction him as a ruthless profit-maximizer willing to use his social network against society, then 2009 to 2010 would have been a good opportunity.

A Sweeping Economic Order

Facebook was the first follower, but not the last. Google, Facebook, Amazon, Microsoft and Apple are private surveillance empires, each with distinct business models. Google and Facebook are data companies and surveillance-capitalist pure plays. The others have varied lines of business that may include data, services, software and physical products. In 2021 these five U.S. tech giants represent five of the six largest publicly traded companies by market capitalization in the world.

As we move into the third decade of the 21st century, surveillance capitalism is the dominant economic institution of our time. In the absence of countervailing law, this system successfully mediates nearly every aspect of human engagement with digital information. The promise of the surveillance dividend now draws surveillance economics into the “normal” economy, from insurance, retail, banking and finance to agriculture, automobiles, education, health care and more. Today all apps and software, no matter how benign they appear, are designed to maximize data collection.

Historically, great concentrations of corporate power were associated with economic harms. But when human data are the raw material and predictions of human behavior are the product, then the harms are social rather than economic. The difficulty is that these novel harms are typically understood as separate, even unrelated, problems, which makes them impossible to solve. Instead, each new stage of harm creates the conditions for the next stage.

All of it begins with extraction. An economic order founded on the secret massive-scale extraction of human data assumes the destruction of privacy as a nonnegotiable condition of its business operations. With privacy out of the way, ill-gotten human data are concentrated within private corporations, where they are claimed as corporate assets to be deployed at will.

The social effect is a new form of inequality, reflected in the colossal asymmetry between what these companies know about us and what we know about them. The sheer size of this knowledge gap is conveyed in a leaked 2018 Facebook document, which described its artificial intelligence hub, ingesting trillions of behavioral data points every day and producing six million behavioral predictions each second.

Next, these human data are weaponized as targeting algorithms, engineered to maximize extraction and aimed back at their unsuspecting human sources to increase engagement. Targeting mechanisms change real life, sometimes with grave consequences. For example, the Facebook Files depict Mr. Zuckerberg using his algorithms to reinforce or disrupt the behavior of billions of people. Anger is rewarded or ignored. News stories become more trustworthy or unhinged. Publishers prosper or wither. Political discourse turns uglier or more moderate. People live or die.

Occasionally the fog clears to reveal the ultimate harm: the growing power of tech giants willing to use their control over critical information infrastructure to compete with democratically elected lawmakers for societal dominance. Early in the pandemic, for example, Apple and Google refused to adapt their operating systems to host contact-tracing apps developed by public health authorities and supported by elected officials. In February, Facebook shut down many of its pages in Australia as a signal of refusal to negotiate with the Australian Parliament over fees for news content.

That’s why, when it comes to the triumph of surveillance capitalism’s revolution, it is the lawmakers of every liberal democracy, especially in the United States, who bear the greatest burden of responsibility. They allowed private capital to rule our information spaces during two decades of spectacular growth, with no laws to stop it.

Fifty years ago the conservative economist Milton Friedman exhorted American executives, “There is one and only one social responsibility of business — to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game.” Even this radical doctrine did not reckon with the possibility of no rules.

Democracy’s Counterrevolution

Democratic societies riven by economic inequality, climate crisis, social exclusion, racism, public health emergency and weakened institutions have a long climb toward healing. We can’t fix all our problems at once, but we won’t fix any of them, ever, unless we reclaim the sanctity of information integrity and trustworthy communications. The abdication of our information and communication spaces to surveillance capitalism has become the meta-crisis of every republic, because it obstructs solutions to all other crises.

Neither Google, nor Facebook, nor any other corporate actor in this new economic order set out to destroy society, any more than the fossil fuel industry set out to destroy the earth. But like global warming, the tech giants and their fellow travelers have been willing to treat their destructive effects on people and society as collateral damage — the unfortunate but unavoidable byproduct of perfectly legal economic operations that have produced some of the wealthiest and most powerful corporations in the history of capitalism.

Where does that leave us? Democracy is the only countervailing institutional order with the legitimate authority and power to change our course. If the ideal of human self-governance is to survive the digital century, then all solutions point to one solution: a democratic counterrevolution. But instead of the usual laundry lists of remedies, lawmakers need to proceed with a clear grasp of the adversary: a single hierarchy of economic causes and their social harms.

We can’t rid ourselves of later-stage social harms unless we outlaw their foundational economic causes. This means we move beyond the current focus on downstream issues such as content moderation and policing illegal content. Such “remedies” only treat the symptoms without challenging the illegitimacy of the human data extraction that funds private control over society’s information spaces. Similarly, structural solutions like “breaking up” the tech giants may be valuable in some cases, but they will not affect the underlying economic operations of surveillance capitalism.

Instead, discussions about regulating big tech should focus on the bedrock of surveillance economics: the secret extraction of human data from realms of life once called “private.” Remedies that focus on regulating extraction are content neutral. They do not threaten freedom of expression. Instead, they liberate social discourse and information flows from the “artificial selection” of profit-maximizing commercial operations that favor information corruption over integrity. They restore the sanctity of social communications and individual expression.

No secret extraction means no illegitimate concentrations of knowledge about people. No concentrations of knowledge means no targeting algorithms. No targeting means that corporations can no longer control and curate information flows and social speech or shape human behavior to favor their interests. Regulating extraction would eliminate the surveillance dividend and with it the financial incentives for surveillance.

While liberal democracies have begun to engage with the challenges of regulating today’s privately owned information spaces, the sober truth is that we need lawmakers ready to engage in a once-a-century exploration of far more basic questions: How should we structure and govern information, connection and communication in a democratic digital century? What new charters of rights, legislative frameworks and institutions are required to ensure that data collection and use serve the genuine needs of individuals and society? What measures will protect citizens from unaccountable power over information, whether it is wielded by private companies or governments?

Liberal democracies should take the lead because they have the power and legitimacy to do so. But they should know that their allies and collaborators include the people of every society struggling against a dystopian future.

The corporation that is Facebook may change its name or its leaders, but it will not voluntarily change its economics.

Will the call to “regulate Facebook” dissuade lawmakers from a deeper reckoning? Or will it prompt a heightened sense of urgency? Will we finally reject the old answers and free ourselves to ask the new questions, beginning with this: What must be done to ensure that democracy survives surveillance capitalism?

Shoshana Zuboff is the author of “The Age of Surveillance Capitalism” and a professor emeritus at Harvard Business School.

Disclaimer: this post and the subject matter and contents thereof - text, media, or otherwise - do not necessarily reflect the views of the 8kun administration.

cb0e92 No.223409

File: 03ecf2e90b5b749⋯.png (247.25 KB,1308x1156,327:289,Screenshot_2025_03_08_at_0….png)

File: 3d418b9286751c0⋯.png (670.27 KB,1294x2710,647:1355,Screenshot_2025_03_08_at_0….png)

https://www.wired.com/2004/02/pentagon-kills-lifelog-project/

Pentagon Kills LifeLog Project

The Pentagon canceled its so-called LifeLog project, an ambitious effort to build a database tracking a person's entire existence.

Run by Darpa, the Defense Department's research arm, LifeLog aimed to gather in a single place just about everything an individual says, sees or does: the phone calls made, the TV shows watched, the magazines read, the plane tickets bought, the e-mail sent and received. Out of this seemingly endless ocean of information, computer scientists would plot distinctive routes in the data, mapping relationships, memories, events and experiences.

LifeLog's backers said the all-encompassing diary could have turned into a near-perfect digital memory, giving its users computerized assistants with an almost flawless recall of what they had done in the past. But civil libertarians immediately pounced on the project when it debuted last spring, arguing that LifeLog could become the ultimate tool for profiling potential enemies of the state.

Researchers close to the project say they're not sure why it was dropped late last month. Darpa hasn't provided an explanation for LifeLog's quiet cancellation. "A change in priorities" is the only rationale agency spokeswoman Jan Walker gave to Wired News.

However, related Darpa efforts concerning software secretaries and mechanical brains are still moving ahead as planned.

LifeLog is the latest in a series of controversial programs that have been canceled by Darpa in recent months. The Terrorism Information Awareness, or TIA, data-mining initiative was eliminated by Congress – although many analysts believe its research continues on the classified side of the Pentagon's ledger. The Policy Analysis Market (or FutureMap), which provided a stock market of sorts for people to bet on terror strikes, was almost immediately withdrawn after its details came to light in July.

"I've always thought (LifeLog) would be the third program (after TIA and FutureMap) that could raise eyebrows if they didn't make it clear how privacy concerns would be met," said Peter Harsha, director of government affairs for the Computing Research Association.

"Darpa's pretty gun-shy now," added Lee Tien, with the Electronic Frontier Foundation, which has been critical of many agency efforts. "After TIA, they discovered they weren't ready to deal with the firestorm of criticism."

That's too bad, artificial-intelligence researchers say. LifeLog would have addressed one of the key issues in developing computers that can think: how to take the unstructured mess of life and recall it as discrete episodes – a trip to Washington, a sushi dinner, construction of a house.

"Obviously we're quite disappointed," said Howard Shrobe, who led a team from the Massachusetts Institute of Technology Artificial Intelligence Laboratory which spent weeks preparing a bid for a LifeLog contract. "We were very interested in the research focus of the program … how to help a person capture and organize his or her experience. This is a theme with great importance to both AI and cognitive science."

To Tien, the project's cancellation means "it's just not tenable for Darpa to say anymore, 'We're just doing the technology, we have no responsibility for how it's used.'"

Private-sector research in this area is proceeding. At Microsoft, for example, minicomputer pioneer Gordon Bell's program, MyLifeBits, continues to develop ways to sort and store memories.

David Karger, Shrobe's colleague at MIT, thinks such efforts will still go on at Darpa, too.

"I am sure that such research will continue to be funded under some other title," wrote Karger in an e-mail. "I can't imagine Darpa 'dropping out' of such a key research area."

Disclaimer: this post and the subject matter and contents thereof - text, media, or otherwise - do not necessarily reflect the views of the 8kun administration.

cb0e92 No.223410

File: e7eb3e9949eb618⋯.png (728.1 KB,1214x1982,607:991,Screenshot_2025_03_08_at_0….png)

https://web.archive.org/web/20031211153723/https://www.darpa.mil/ipto/Programs/lifelog/index.htm

LifeLog is one part of DARPA’s research in cognitive computing. (DARPA)

The research is fundamentally focused on developing revolutionary capabilities that would allow people to interact with computers in much more natural and easy ways than exist today.

This new generation of cognitive computers will understand their users and help them manage their affairs more effectively. The research is designed to extend the model of a personal digital assistant (PDA) to one that might eventually become a personal digital partner.

LifeLog is a program that steps towards that goal. The LifeLog Program addresses a targeted and very difficult problem: how individuals might capture and analyze their own experiences, preferences and goals. The LifeLog capability would provide an electronic diary to help the individual more accurately recall and use his or her past experiences to be more effective in current or future tasks.

Program Description:

The goal of the LifeLog is to turn the notebook computers or personal digital assistants used today into much more powerful tools for the warfighter.

The LifeLog program is conducting research in the following three areas:

Sensors to capture data and data storage hardware

Information models to store the data in logical patterns

Feature detectors and classification agents to interpret the data

To build a cognitive computing system, a user must store, retrieve, and understand data about his or her past experiences. This entails collecting diverse data, understanding how to describe the data, learning which data and what relationships among them are important, and extracting useful information. The research will determine the types of data to collect and when to collect it. The goal of the data collection is to “see what I see,” rather than to “see me”. Users are in complete control of their own data collection efforts, decide when to turn the sensors on or off, and decide who will share the data.
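The description above implies a simple shape for the underlying system: timestamped capture records, per-sensor on/off switches, and an owner-controlled sharing list. Here is a minimal sketch in Python of that shape, assuming a toy record type; every name here is illustrative, not DARPA's actual design:

from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List

@dataclass
class Capture:
    timestamp: datetime
    sensor: str              # e.g. "gps", "audio", "email"
    payload: bytes           # raw captured data
    tags: List[str] = field(default_factory=list)

@dataclass
class LifeLog:
    owner: str
    sensors_enabled: Dict[str, bool] = field(default_factory=dict)
    shared_with: List[str] = field(default_factory=list)
    captures: List[Capture] = field(default_factory=list)

    def record(self, capture: Capture) -> None:
        # "Users ... decide when to turn the sensors on or off":
        # data from a disabled sensor is never stored.
        if self.sensors_enabled.get(capture.sensor, False):
            self.captures.append(capture)

    def query(self, sensor: str, since: datetime) -> List[Capture]:
        # The retrieval half of "store, retrieve, and understand".
        return [c for c in self.captures
                if c.sensor == sensor and c.timestamp >= since]

log = LifeLog(owner="anon", sensors_enabled={"gps": True, "audio": False})
log.record(Capture(datetime(2003, 5, 22, 9, 0), "gps", b"51.5,-0.1"))
log.record(Capture(datetime(2003, 5, 22, 9, 1), "audio", b"..."))  # dropped
print(len(log.query("gps", datetime(2003, 1, 1))))  # 1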

Program Impact:

LifeLog technology will be useful in several different ways. First, the technology could result in far more effective computer assistants for warfighters and commanders because the computer assistant can access the user's past experiences. Second, it could result in much more efficient computerized training systems - the computer assistant would remember how each individual student learns and interacts with the training system, and tailor the training accordingly.

References:

Vannevar Bush's MEMEX (1945)

http://www.theatlantic.com/unbound/flashbks/computer/bushf.htm

J.C.R. Licklider's OLIVER (1968)

http://memex.org/licklider.pdf

Donald Norman's Teddy (1992)

http://www.jnd.org/TurnSignals/TS-TheTeddy.html

Gordon Bell's MyLifeBits (2002)

http://research.microsoft.com/research/barc/MediaPresence/MyLifeBits.aspx

UK CRC Grand Challenge "Memories for Life" (2003)

http://www.nesc.ac.uk/esi/events/Grand_Challenges/proposals/Memories.pdf

Disclaimer: this post and the subject matter and contents thereof - text, media, or otherwise - do not necessarily reflect the views of the 8kun administration.

cb0e92 No.223411

>>223410

Was RIGHT on the money with Vannevar Bush's stuff. (not that it was THAT big of a leap)

Disclaimer: this post and the subject matter and contents thereof - text, media, or otherwise - do not necessarily reflect the views of the 8kun administration.

cb0e92 No.223412

File: 026b12d6ea6ffa2⋯.png (479.53 KB,1328x1880,166:235,Screenshot_2025_03_08_at_0….png)

>>223410

https://web.archive.org/web/20060329172004/http://www.nesc.ac.uk/esi/events/Grand_Challenges/proposals/Memories.pdf

“Memories for life”

Managing information over a human lifetime

Andrew Fitzgibbon (awf@robots.ox.ac.uk)

Ehud Reiter (ereiter@csd.abdn.ac.uk)

May 22, 2003

People are capturing and storing an ever-increasing amount of information about themselves, including emails, web browsing histories, digital images, and audio recordings. This tsunami of data presents numerous challenges to computer science, including: how to physically store such “digital memories” over decades; how to protect privacy, especially when data such as photos may involve more than one person; how to extract useful knowledge from this rich library of information; how to use this knowledge effectively, for example in knowledge-based systems; and how to effectively present memories and knowledge to different kinds of users. The unifying grand challenge is to manage this data, these digital memories, for the benefit of human life and for a lifetime.

For example, it is now possible to store every digital photograph one takes. A near-term challenge is to reliably organise and search an “infinite photo album”. Searching of textual information is well understood, but indexing of images and audio remains an open problem. This challenge will be forced upon us in this decade; however the optimal course of action is far from obvious, and the efforts of researchers from across all computing disciplines will be needed to ensure a successful outcome. In the longer term, we might extract and indeed create not just individual photographs, but connected stories, such as “my son learns to swim.” This requires semantic analysis of images and other memory data to understand how they are connected, and which are most important to the story. The second strand of the challenge is to learn these semantic rules from the memories themselves.

The wider challenge is to extract knowledge from the data, and use this knowledge to build more intelligent tools. For example, we have for decades been able to build medical diagnosis systems that outperform most human doctors, if they have access to sufficient data about the patient. Unfortunately the data needed is almost never available, because it includes information that currently must be entered manually, such as the patient’s visual appearance, behaviour, environment, and history. If this information could be obtained automatically by analysing stored memories, it would revolutionise medical informatics and indeed medicine.

The creation, analysis, and usage of very large data sets is currently a hot topic in many areas of computer science, and recent advances are part of why we can hope to meet this challenge over the next two decades. For example, it is now possible to learn to identify individual objects in a photograph, a task which was previously thought to require human-like intelligence. Given several thousand photos that contain Ringo (some of which may also contain other people, such as Paul and John), and labels that indicate who is in each photo, we can use machine learning techniques to automatically construct a “Ringo-detector” that reliably identifies areas containing “Ringo” in new images. Learning techniques require large amounts of data, and this will be a natural byproduct of the management of digital memories. The digital archive of even one person in the year 2019 is likely to consist of petabytes of linked images, documents and audio; the potential for extracting useful knowledge from this archive is stupendous, and only limited by our imagination.

(Footnote: This document is based on the UKCRC (UK Computing Research Committee’s) Grand Challenges in Computing workshop. It attempts to integrate ideas put forward by many people in submissions to the workshop, in discussions at the workshop itself, and in contributions to a web-based discussion forum after the workshop. For more information on UKCRC and its Grand Challenges initiative, see http://www.ukcrc.org.uk)
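The “Ringo-detector” passage is, in modern terms, ordinary supervised learning. A minimal sketch, assuming each photo has already been reduced to a numeric feature vector (the hard computer-vision step the proposal alludes to) and using random stand-in data rather than a real photo archive:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in archive: 1,000 photos reduced to 128-dim feature vectors,
# each labelled 1 if the photo contains "Ringo", else 0.
features = rng.normal(size=(1000, 128))
labels = rng.integers(0, 2, size=1000)

# Fit a simple classifier; localising the *areas* containing the person,
# as the proposal imagines, is omitted for brevity.
detector = LogisticRegression(max_iter=1000).fit(features, labels)

def contains_ringo(photo_features: np.ndarray) -> bool:
    # Predict whether a new photo contains the labelled person.
    return bool(detector.predict(photo_features.reshape(1, -1))[0])

print(contains_ringo(rng.normal(size=128)))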

Digital memories clearly offer tremendous potential for science and technology. We must also ensure that they help society by widening access to information technology, so that everyone, not just well-educated people with no disabilities in rich countries, could benefit from the information revolution. The challenge is to develop detailed models of an individual’s abilities, skills, and preferences by analysing his or her digital memories; and to use these models to optimise computer systems for individuals. For example, a short-term challenge could be to develop a model of a user’s literacy level by analysing examples of what he or she reads and writes, and linguistically simplify web pages based on this model; this would help the 20% of the UK population with poor literacy. A longer-term challenge might be presenting a story extracted from memories in different modalities according to ability and preference; for example, as an oral narrative in the user’s native language, or as a purely visual virtual reality reconstruction for people such as aphasics who have problems understanding language. Limited examples of such systems can be built now; the challenge is in mining the wealth of information latent in digital memories so that fully competent systems could be in use in fifteen years.

From a scientific perspective, this proposal is a challenge for many areas of computing research. The above examples have alluded to some of the specific scientific challenges in artificial intelligence, information retrieval, and human-computer interaction, but there are many challenges for other areas of computer science as well. For example, we will need to develop techniques for storing large amounts of complex data over decades and indeed centuries, in a manner that is robust to changes in hardware, operating systems, and indexing strategies. The computer and programs which operate on the data will change frequently over a human lifetime, but the data must outlast the systems which analyse it. Questions will be asked of the data which were not predicted when the data was indexed, so the indexing strategies must change over time. Security research must face the challenge of protecting information over decades, in a way that is robust to advances in computational power or mathematical knowledge, but without imposing untenable constraints on the user’s activity; and also the challenge of rigorously proving to a sceptical public that their memories are secure from hackers, amoral companies, and “Big Brother” governments. These are just a few of the scientific challenges of Memories for Life; more are listed below.

Memories for Life also raises public policy issues which must be addressed, in particular about control and access rights. For example, should courts or the police have the right to access memories that are relevant to a legal case or criminal investigation? Should people who are included in another person’s memories (in a digital photograph, for example) have any control over how these memories are used? Should aggregate information from memories be made available for medical and other kinds of scientific research? Such issues must be resolved in a way that is satisfactory to the community, in order for Memories for Life to reach its scientific and technological potential.

Meeting the challenge requires the convergence of a number of disparate threads of scientific research. Acceptance of the agenda among the wider scientific community and the public must be as important a consideration as scientific viability. The challenge will be met when the majority of people can efficiently manage their information stream, and when all of us can benefit from our digital memories.

Exemplars of the Challenge

This proposal develops this programme using illustrative “exemplars”: specialist systems or tools which encapsulate the essence of the challenge. Each exemplar has access to all data stored about an individual, or to a subset pertinent to some aspect of the individual’s life, such as their personal or professional activities. Themes which link the exemplars include:

The deep, persistent model of the user which is inherent in the digital memories, but which will be differently mined by each challenge;

Sensory interaction between the user and computer which adjusts to the abilities of each user, including visual, aural and haptic interactions, and which allows all people access to digital information;

Extraction of deep structure from the repository of memories, first to index the information, and then to present new views of the knowledge embedded therein;

Adaptation of the representations to allow tasks whose specifications continually evolve, and for which the appropriate algorithms and data structures cannot be known at the time the representation is first designed.

The exemplars are presented as addressing specific tasks; however, each implies several scientific advances which have wider scope than the narrow domain of the exemplar. The challenges are labelled with an approximate time to completion, but many might be expected to remain research topics well into the 21st century.

Multimedia searching (5-year challenge): Search for images or audio by presenting examples, rather than text. Current technology uses textual annotations of non-text data, which are expensive to produce and can never be complete: an image of Mardi Gras may not be labelled “people”; the description of the “Mona Lisa” might run to more than a thousand words.
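One plausible modern reading of this exemplar is query-by-example over embedding vectors: represent every image as a vector and return the stored images nearest to the query, with no text labels required. A hedged sketch, with random vectors standing in for a real feature extractor:

import numpy as np

def search_by_example(query_vec: np.ndarray, album_vecs: np.ndarray, k: int = 5):
    # Cosine similarity between the query embedding and every stored one;
    # return the indices of the k most similar album images.
    q = query_vec / np.linalg.norm(query_vec)
    a = album_vecs / np.linalg.norm(album_vecs, axis=1, keepdims=True)
    return np.argsort(a @ q)[::-1][:k]

album = np.random.rand(10_000, 256)  # 10,000 stored image embeddings
query = np.random.rand(256)          # embedding of the example image
print(search_by_example(query, album))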

Electronic GP (10-year challenge): Analyse stored memories to create a model of a person’s activities, life style, behaviour and health. Use this information to give advice when the person has health concerns; as noted above, the biggest problem with current computerised medical diagnosis systems is lack of information. Advice could include diagnosis, referral to specialists, and suggested lifestyle changes; it should be presented in a manner that is relevant to the person’s situation and knowledge.

Stories from a Life (15-year challenge): Analyse stored memories and re-present them as “stories”. These may use a different modality from the original memory (eg, textual story from visual memory), and should be tailored for different audiences (eg, grandchildren vs. police) and contexts. Stories are created either automatically or under the guidance of the individual. Other people’s “memories” may be accessed when allowed and desirable. Many older people in particular might find this very valuable.

Personal Simplified Web Pages (5-year challenge): Acquire a model of an individual’s linguistic competence from the stored memories, especially emails and audio data. Use this model to linguistically simplify Web pages, for example replacing words the individual may not know with paraphrases using simpler words.
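A toy sketch of this exemplar under loose assumptions: estimate the reader's vocabulary from their own writing, then swap unfamiliar words for entries in a stand-in synonym table (a real system would need a far richer language model):

import re

def known_vocabulary(personal_texts):
    # Crude literacy model: the set of words the user has written.
    words = set()
    for text in personal_texts:
        words.update(re.findall(r"[a-z]+", text.lower()))
    return words

# Stand-in synonym table; a real system would learn paraphrases.
SIMPLER = {"purchase": "buy", "commence": "start", "utilise": "use"}

def simplify(page_text, vocab):
    def swap(match):
        word = match.group(0)
        if word.lower() in vocab:
            return word
        return SIMPLER.get(word.lower(), word)
    return re.sub(r"[A-Za-z]+", swap, page_text)

vocab = known_vocabulary(["i want to buy a new phone", "start the film"])
print(simplify("Commence the download, then purchase the upgrade.", vocab))
# -> "start the download, then buy the upgrade."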

“Newpaper” (10-year challenge): Develop “smart electronic paper” that lets anyone (even people without formal IT education) write down thoughts, scribblings, drawings, or whatever, and have these incorporated into the person’s digital memories. In addition to “clean sheet” information, newpaper also lets people display and annotate existing documents, including their other memories. Newpaper can be used anywhere (bus, bed, bath), so people always have access to their memories.

Intelligent Mathematics Tutor for Children (15-year challenge): Analyse stored memories of all of a child’s mathematical attempts, including both formal schoolwork and real-life usage of mathematics. Create a model of the child’s mathematical knowledge, and use this to drive an intelligent tutoring system. Personalise examples and feedback from the tutoring system based on what the child is interested in and on what he has recently done.

Aid for Elderly with Short-Term Memory Problems (5-year challenge): Analyse stored memories, including visual data, to create a schedule of the person’s typical day. Monitor his or her activities, and help the person through the day by prompting. Also alert the person or his/her carer if there is a significant deviation from the normal schedule which is likely to be a consequence of short-term memory problems.
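A sketch of the prompting-and-alerting loop this exemplar describes, assuming the typical-day schedule has already been learned; hours, labels and the tolerance window are illustrative only:

# Learned elsewhere from stored memories: hour -> typical activity.
TYPICAL_DAY = {8: "breakfast", 9: "walk", 13: "lunch", 18: "dinner"}

def check_day(observed, tolerance_hours=2):
    # observed: dict of hour -> activity seen so far today.
    # Yield an alert for each typical activity missing from its window.
    for hour, activity in TYPICAL_DAY.items():
        window = range(hour - tolerance_hours, hour + tolerance_hours + 1)
        if not any(observed.get(h) == activity for h in window):
            yield f"ALERT: '{activity}' not seen near {hour}:00"

today = {8: "breakfast", 14: "lunch"}  # no walk yet, lunch was late
for alert in check_day(today):
    print(alert)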

Virtual Memories (10-year challenge): Create a virtual world that represents an incident from a person’s life, using stored memories of that incident. Reconcile and integrate memories of different modalities (eg, video and emails), and interpolate as necessary in space and time to fill in gaps. For example, a 3D birthday party, or an action replay of one’s greatest sporting moment.

Answers to Questions

“Is it driven by curiosity about the foundations, applications or limits of basic Science?”

In considering how digital memories can be used to build systems which embody human-like knowledge, it might be thought that the grand challenge is to understand how information becomes knowledge. However, such goals are too grand. The challenge is driven by the need to overcome the limitations inherent in our current technologies for managing enormous, heterogeneous, and continually expanding information repositories. Overcoming these limits will certainly require basic research in many areas of computer science, for example:

Database systems: We need to store many different types of data (text, audio, visual, log files) over a very long period of time (a lifetime and beyond). How can we do this in a manner which easily adapts to new hardware and software, which easily allows new types of information to be integrated when technology advances, and which easily allows new types of questions to be asked as society changes?

Security: How can we protect people’s privacy, especially when one person’s “memories” contain information about someone else? What control should people have over information about them in other people’s memories, and how can this control be implemented? How can we prove both to the scientific community and to the general public that memories are secure from attackers?

Operating systems: A person’s memories will contain petabytes of data and last for decades. How should this data be distributed across physical filestores, in a way that maximises accessibility and reliability (including reliability in the face of “once in a century” disasters)? Peer-to-peer networking is likely to be an important contributor to the system.

Artificial intelligence: How can we interpret audio and visual data, with a minimal amount of annotation and guidance from the person? How can we learn useful generalisations from the interpreted data, and how can we represent and reason with these generalisations?

User modelling: How can we represent people’s knowledge, experiences, beliefs, emotions, intents, abilities, and so forth in a coherent and unified fashion? How should we update such models when people change?

Human-computer interaction: What is the best way from an HCI perspective for annotating and searching memories? How will new sensor technologies such as haptic interfaces be integrated? How should we adapt interfaces, web pages, and documentation so that they are well matched to the information about people that can be extracted from their memories (see user modelling above)? In particular, how can we support people with disabilities or skill impairments, or people in stressful environments?

Graphics and virtual reality: Given such a rich source of knowledge about the world, how can we use it to build virtual models of the world that integrate all the different types of sensory information in a coherent and consistent way?

“Does it promise a revolutionary shift in the accepted paradigm of thinking or practice?”

The challenge depends on a paradigm shift: from the concept of a “computer” for life, to a “memory” for life: an information repository which is conceptually separate from the computers which manage that repository. In turn, we must stop thinking of information in terms of disjoint data types (images, audio recordings, text files, web pages), and instead think holistically of information as giving different perspectives on people, events, and the world.

“Will its promotion as a Grand Challenge contribute to the progress of Science?”

Memories for Life will help science as a whole in many ways. Firstly, the data collected will itself be an invaluable resource for the cognitive sciences. To take one small example, detailed long-term data about individuals would be a tremendous resource for longitudinal studies about child development, disease progression, and so forth.

Secondly, better techniques for managing and analysing large data sets would be very helpful to many fields of science, ranging from genomics to economics. We live in an era where the amount of scientific data is exploding exponentially; managing and analysing this data is a challenge for all of science.

Finally, an argument could be made that the organisation of information is a fundamental constituent of scientific enquiry, and that a challenge to automate that organisation is essentially a challenge to understand the mechanisms which underlie human thought. While this is not a goal of this challenge, it again indicates the importance of information management to all scientific endeavour. More practically, if some problems which currently require human intelligence are found to be amenable to computation, the implications for neuroscience and psychology will be significant.

“Does it have the enthusiastic support of established scientific communities?”

The challenge speaks to over 30% of the submissions to the UKCRC Grand Challenges workshop and provides a provocative position on the nature of computer science. The specific scientific challenges we outline above are extensions of key goals in many areas of Computer Science.

“Does it appeal to the imagination of the general public?”

We hope the individual exemplars will excite people, because they can see the benefit of things like multimedia searching or intelligent tutoring. On a broader scale, our memories are what define us, and we believe that with careful presentation, the general public would be excited and inspired by our vision of computing with digital memories. Certainly the amount of media attention received by Microsoft’s MyLifeBits project (http://research.microsoft.com/barc/mediapresence/mylifebits.aspx) suggests that the media consider this topic to be of widespread interest.

People will certainly be concerned about the extent to which information about them is captured, and about who owns and has access rights to this information. Such issues are already being widely debated in the popular press. This is a public policy issue, not a scientific one, but people may be hostile to this challenge unless they feel confident that they have sufficient control over their memories.

“Does it avoid duplicating evolutionary development of commercial products?”

Certainly there is commercial interest in better ways of organising and searching personal archives of photographs and emails. Many commercial enterprises are built around search technology, and have an interest in extending the types of data which can be searched, or in moving the technology onto our local disks. Although most commercial R&D is focused on short-term incremental improvements to search technology, Microsoft in particular is also working on longer-term research in this area, in the MyLifeBits project mentioned above.

We welcome the interest of Microsoft and other companies in this area, but we believe that the extent of the challenge means that the path to a successful implementation of any exemplar will involve many false starts, and will require many minds working in parallel to achieve results. A single commercial organisation would find it difficult to take the risk that many person-years of work might lead to an unsaleable result. The development of universally trusted security protocols may require that no central authority controls privacy, and may depend on open standards and open research and development models which conflict with commercial objectives.

“When was it first proposed as a challenge? Why has it been so difficult so far? Why is it now expected to be feasible in a ten to fifteen year timescale?”

An argument could be made that this was first proposed by Vannevar Bush in the 1940s. But it is only now that technology (disk capacity, sensors, processor speed) permits the acquisition and storage of large diverse collections of digital memories. We are just at the point in time where an individual could stream video to disk at ISDN rates and never be limited by disk capacity. Thus now is certainly the time to start managing these information collections. Confidence in the feasibility of the programme is reinforced by recent advances in areas such as search technology, computer vision and graphics, and natural language processing, among others. Recent successes in machine learning and statistics have indicated that many of the problems that were previously considered hard enough to require cognition are in fact soluble in a purely data-driven manner.

“What are the most likely reasons for failure?”

Outright failure is unlikely: information management will always improve. It may prove, however, that some tasks cannot be solved without human cognition, and cannot be usefully automated. On the other hand, current progress in machine learning offers hope that this set of tasks is smaller than previously thought. On a larger scale, a significant risk is perhaps public hostility if the privacy implications are not carefully addressed.

“Is there a clear criterion for the success or failure of the project after fifteen years?”

Success criteria can be defined for the individual exemplars. For example, an Electronic GP could be evaluated by comparing the quality of its advice to that given by a human doctor; and Virtual Memories could be evaluated by asking people to rate their fidelity and internal consistency. It is more difficult to evaluate the challenge as a whole, since it is primarily a research direction and framework. However, certainly one measure of success is the degree to which algorithms and representations are shared and reused by the different exemplars.

“What kind of long-term benefits to science, industry, or society may be expected from the project even if it is only partially successful?”

Each of the above exemplars and scientific challenges should produce useful science and technology even if only partially successful. For example, even a partially successful “Personal Simplified Web Pages” should give us a much better scientific understanding of how language use varies among individuals, and how web pages should be written to be accessible to as many people as possible.

“Does it have international scope?”

This challenge is of interest to researchers around the world. Indeed, DARPA in the US currently has a “Lifelog” programme (http://www.darpa.mil/ipto/Solicitations/PIP03-30.html) which is similar to Memories for Life in many ways, although focused more on the next 5 years than the next 20 years. Memories for Life is a general challenge about the acquisition, storage, and use of data about individuals; there is nothing in it that is specific to the UK or any other country. Research, wherever conducted, which makes a significant advance will be internationally applauded. There is good scope for involving developing as well as developed countries in aspects of this challenge, for example user models that incorporate cultural information about people.

“What calls does it make for collaboration of research teams with diverse skills?”

This challenge is particularly inclusive in that it naturally offers a place for many aspects of computer science. It is not narrowly focused on one subarea of computing; we believe that most computing science researchers will be able to contribute to the challenge.

“How can it be promoted by competition between teams with diverse approaches?”

The number of approaches which exist for the comparison and summarisation of non-text data is already large. Many blind alleys will be encountered in the search for efficient and reliable information management techniques. No single team can explore all the possibilities, but any individual advance will assist all researchers working on the challenge. The separation into exemplars makes the work naturally divisible into independent research efforts, and thus naturally allows collaboration and competition.

“What are the first steps?”

One obvious next step is to organise workshop(s) on Memories for Life, to bring interested researchers (international as well as UK) together and further develop the challenge. Beyond this, several contributors to our web discussion suggested that we consider creating an example Memories for Life corpus, which researchers could use to develop and test algorithms, applications, and so forth. Certainly corpora have been extremely valuable in many other areas of Computing research. Creating a corpus for Memories for Life would be expensive, time-consuming, and perhaps beyond the means of most individual research groups; but if such a corpus was created, individual groups could use it to investigate the research issues mentioned above.

Disclaimer: this post and the subject matter and contents thereof - text, media, or otherwise - do not necessarily reflect the views of the 8kun administration.
Post last edited at

cb0e92 No.223415

File: cf81fb2ba39f2ef⋯.png (521.54 KB,1440x1874,720:937,Screenshot_2025_03_08_at_0….png)

File: 1ce3a14a42c8e8c⋯.png (53.02 KB,602x224,43:16,Screenshot_2025_03_08_at_0….png)

https://web.archive.org/web/20031009041623/https://www.wired.com/news/privacy/0,1848,59724,00.html

Pentagon Wants to Make a New PAL

By Noah Shachtman

02:00 AM Jul. 23, 2003 PT

The Pentagon is doling out $29 million to develop software-based secretaries that understand their bosses' habits and can carry out their wishes automatically.

Carnegie Mellon University's School of Computer Science will get $7 million to build a Perceptive Assistant that Learns, or PAL, a kind of digital flunky that can schedule meetings, maintain websites and reply to routine e-mail on its own. A total of $22 million is going to SRI International, Dejima and a coalition of other researchers for the construction of a wartime PAL.

The efforts could make leaders in the boardroom and on the battlefield more efficient, says the Defense Advanced Research Projects Agency, or Darpa. But some defense analysts are finding it hard to see the military value in such a system.

Digital assistants have been a Darpa focus of late. The controversial, all-encompassing LifeLog project is also supposed to lead to the construction of a computerized helper. LifeLog's goal is to digitally capture and categorize every aspect of people's lives, from the TV shows they watch to the places they visit. The more information the assistant has about its boss, the argument goes, the more useful it can be.

"The idea is to develop a system that will adapt to the user, instead of the other way around," said Antoine Blondeau, president of Dejima, a software development firm in San Jose, California, that is working on the PAL effort.

According to Darpa spokeswoman Jan Walker, PAL originally was thought of as an office assistant, to set up meetings, handle correspondence and help write quarterly reports. Commercial software (e-mail and scheduling programs, for example) will be adapted for PAL purposes. To these will be added modules that will train the software to its user's preferences and components that will decide when to interrupt the boss with questions.

The program "must respond to specific instructions i.e., 'Notify me as soon as the new budget numbers arrive by e-mail' without the need for reprogramming," Carnegie Mellon computer scientist Scott Fahlman said in a statement.

"The point is to do all of the things a human assistant would do. If a meeting gets canceled, it would notify the appropriate people, de-schedule a (conference) room, maybe change your trip schedule. If you turned down another invitation because you were busy with this meeting, it might remind you of that," said artificial-intelligence authority and longtime Darpa contractor Doug Lenat. He's not directly associated with the PAL project, but Lenat is bidding on the LifeLog program.

To Steven Aftergood, a defense analyst with the Federation of American Scientists, the PAL program does little to help the Pentagon in its mission to combat America's adversaries.

"Darpa obviously takes a very broad view of its charter. Organizing e-mail? Allocating office space? These are to Darpa's mission what Tang is to the space program," he wrote in an e-mail.

Agency representative Walker disagreed. A headquarters commander "has a large staff that supports him – finding information, sorting through it, collating it and advising him," she said.

Once fully implemented, a PAL could "cut down on the number of staff in a command center," she continued. "And that could make the command center smaller, more mobile and, therefore, less vulnerable."

GlobalSecurity.org director John Pike, a critic of many Darpa projects, including LifeLog, sees a second possible use for the digital assistant.

The Army has a doctrine, or battlefield rules, for just about any combat situation one can imagine, Pike noted. And soldiers are supposed to follow those tenets strictly.

"But you look at all those field manuals they got, and, jeez Louise, there's no way anyone could memorize all that," he said.

"This could be the little man whispering in your ear, telling you what to do next."

Disclaimer: this post and the subject matter and contents thereof - text, media, or otherwise - do not necessarily reflect the views of the 8kun administration.

cb0e92 No.223466

File: 6c9b2f5dcc5babe⋯.png (42.99 KB,492x292,123:73,Screenshot_2025_03_08_at_2….png)

932

The creation of the internet and ‘connecting’ platforms is bringing about their downfall.

Failure to control.

MSM is dead.

#internetbillofrights

Q

Disclaimer: this post and the subject matter and contents thereof - text, media, or otherwise - do not necessarily reflect the views of the 8kun administration.

cb0e92 No.231831

File: 58d2a1c7e54d95e⋯.png (405.23 KB,508x1624,127:406,Screenshot_2025_06_09_at_1….png)

>>197806

1042

The attached article is all true….

does anyone remember how Facebook became famous? It was the CIA Clown-run OP, the Virginia Tech University shooting, that put FB on the map….

Fugitive 'Facebook founder' says he's alive and well but 'running for his life' from CIA because of its secret involvement in the social media site

http://www.dailymail.co.uk/news/article-3747202/Paul-Ceglia-supposed-Facebook-founder-disappeared-2015-says-s-running-CIA-want-kill-knowledge-involved-social-media-site.html

Image Search Tags:

>>921715

Disclaimer: this post and the subject matter and contents thereof - text, media, or otherwise - do not necessarily reflect the views of the 8kun administration.

cb0e92 No.232171

File: 46a5e2788058324⋯.png (680.48 KB,510x1878,85:313,Screenshot_2025_06_13_at_0….png)

1338

>>1367898

F9

Falcon 9

China

Image Search Tags:

>>1368028

Explore further.

Q

Disclaimer: this post and the subject matter and contents thereof - text, media, or otherwise - do not necessarily reflect the views of the 8kun administration.

35d3c7 No.234346

File: 0697bd90084e781⋯.png (468.96 KB,654x756,109:126,Screenshot_2025_07_06_at_0….png)

File: f2f72a2323dc61e⋯.png (407.79 KB,760x2114,380:1057,Screenshot_2025_07_06_at_0….png)

File: 738a4a051fce769⋯.png (378.28 KB,720x2002,360:1001,Screenshot_2025_07_06_at_0….png)

File: 7c02838ec13a51b⋯.png (451.14 KB,722x2210,361:1105,Screenshot_2025_07_06_at_0….png)

http://web.archive.org/web/20190115101346/https://www.wired.com/story/twitter-location-data-gps-privacy/

YOUR OLD TWEETS GIVE AWAY MORE LOCATION DATA THAN YOU THINK (Wired)

An international group of researchers has developed an algorithmic tool that uses Twitter to automatically predict exactly where you live in a matter of minutes, with more than 90 percent accuracy. It can also predict where you work, where you pray, and other information you might rather keep private, like, say, whether you’ve frequented a certain strip club or gone to rehab.

The tool, called LPAuditor (short for Location Privacy Auditor), exploits what the researchers call an "invasive policy" Twitter deployed after it introduced the ability to tag tweets with a location in 2009. For years, users who chose to geotag tweets with any location, even something as geographically broad as “New York City,” also automatically gave their precise GPS coordinates. Users wouldn’t see the coordinates displayed on Twitter. Nor would their followers. But the GPS information would still be included in the tweet’s metadata and accessible through Twitter’s API.

Twitter didn't change this policy across its apps until April of 2015. Now, users must opt in to share their precise location—and, according to a Twitter spokesperson, a very small percentage of people do. But the GPS data people shared before the update remains available through the API to this day.

The researchers developed LPAuditor to analyze those geotagged tweets and infer detailed information about people’s most sensitive locations. They outline this process in a new, peer-reviewed paper that will be presented at the Network and Distributed System Security Symposium next month. By analyzing clusters of coordinates, as well as timestamps on the tweets, LPAuditor was able to suss out where tens of thousands of people lived, worked, and spent their private time.

A member of Twitter's site integrity team told WIRED that sharing location data on Twitter has always been voluntary and that the company has always given users a way to delete that data in its help section. "We recognized in 2015 that we could be even clearer with people about that, but our overarching perspective on location sharing has always been that it’s voluntary and that users can choose what they do and don't want to share," the Twitter employee said.

It's true that it's always been up to users to geotag their tweets or not. But there's a big difference between choosing to share that you're in Paris and choosing to share exactly where you live in Paris. And yet, for years, regardless of the square mileage of the locations users chose to share, Twitter was choosing to share their locations down to the GPS coordinates. The fact that these details were spelled out in Twitter's help section wouldn't do much good to users who didn't know they needed help in the first place.

"If you're not aware of the problem, you're never going to go remove that data," says Jason Polakis, a co-author of the study and an assistant professor of computer science at the University of Illinois at Chicago specializing in privacy and security. And according to the study, that data can reveal a lot.

In November of 2016, well after Twitter changed its settings, Polakis and researchers at the Foundation for Research and Technology in Crete began pulling Twitter metadata from the company’s API. They were building on prior research that showed it was possible to infer private information from geotagged tweets, but they wanted to see if they could do it at scale and with more precision, using automation.

The researchers analyzed a pool of about 15 million geotagged tweets from about 87,000 users. Some of the location data attached to those tweets may have come from users who wanted to share their exact locations, like, say, a museum or music venue. But there were also plenty of users who shared nothing more than a city or general vicinity, only to have their GPS location shared anyway.

From there, LPAuditor set to work assigning each tweet to a physical spot on a map, and locating it by time zone. That generated clusters of tweets around the map, some busier than others, indicating locations where a given user spends a lot of time—or at least, a lot of time tweeting.
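
The paper's clustering is more sophisticated, but the core move can be sketched in a few lines: bucket each tweet's coordinates, then rank buckets by activity. Rounding to three decimal places (roughly a city block) is an illustrative simplification, not the researchers' actual method.

    # Toy clustering sketch: group tweets into spatial buckets and rank the
    # buckets by how often the user tweets there. Assumes each tweet is a dict
    # with "lat", "lon", and a datetime "ts"; the precision is illustrative.
    from collections import defaultdict

    def cluster_tweets(tweets, precision=3):
        clusters = defaultdict(list)
        for t in tweets:
            key = (round(t["lat"], precision), round(t["lon"], precision))
            clusters[key].append(t["ts"])
        return clusters

    def busiest_clusters(clusters):
        """Clusters sorted by tweet count, busiest first."""
        return sorted(clusters.items(), key=lambda kv: len(kv[1]), reverse=True)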

"If you're not aware of the problem, you're never going to go remove that data."

JASON POLAKIS, UNIVERSITY OF ILLINOIS AT CHICAGO

To predict which cluster might correspond to a user’s home, the researchers directed LPAuditor to look for locations where people spent the longest time span tweeting over the weekend. The thinking was: During the week, you might tweet in the morning, at night, and on your day off, in an unpredictable pattern, but home is where most people spend the bulk of their time on weekends.
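
A crude sketch of that weekend heuristic, reusing the cluster format from above (measuring the first-to-last span of weekend tweets is a deliberate simplification of the paper's approach):

    # Home heuristic sketch: pick the cluster with the longest span of
    # weekend tweeting activity.
    def weekend_span_seconds(timestamps):
        weekend = sorted(ts for ts in timestamps if ts.weekday() >= 5)  # Sat/Sun
        if len(weekend) < 2:
            return 0.0
        return (weekend[-1] - weekend[0]).total_seconds()

    def predict_home(clusters):
        return max(clusters, key=lambda key: weekend_span_seconds(clusters[key]))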

When it came to finding work locations, they did the opposite, analyzing tweet patterns during the week. LPAuditor analyzed the locations where users tweeted the most (not including home), then studied the time frames during which those tweets were sent. That gave the researchers a sense of whether the tweets might have been sent over the course of a typical eight-hour shift, even if that shift was overnight. Finally, the tool looked for the time frame that appeared most often during the week and decided that the location with the most tweets in that time frame was most likely the person’s place of work.
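
Sketched the same way, with the eight-hour shift reduced to three fixed daily windows for brevity (the real tool is more careful about where it places that window):

    # Work heuristic sketch: among weekday tweets outside the home cluster,
    # find the recurring shift window holding the most tweets and return its
    # cluster. The three fixed 8-hour windows are an illustrative shortcut.
    from collections import Counter

    def predict_work(clusters, home_key, shift_hours=8):
        counts = Counter()
        for key, timestamps in clusters.items():
            if key == home_key:
                continue
            for ts in timestamps:
                if ts.weekday() < 5:  # Monday through Friday
                    counts[(key, ts.hour // shift_hours)] += 1
        if not counts:
            return None
        ((key, _window), _n) = counts.most_common(1)[0]
        return key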

When it came time to check their answers, the researchers identified a group of roughly 2,000 users to serve as a sort of ground truth. Compiling this group was a manual process that required two graduate students to independently sift through all of the tweets in the collection to find key phrases that might confirm a person really was home or at work when they sent it. Terms like, “I’m home” or “at the office," for instance, might provide a clue. They inspected each tweet for context that might provide additional information.

They then compared the locations of those tweets to the tool's predictions and found they were highly accurate, identifying people’s homes correctly 92.5 percent of the time. It wasn’t as good at predicting where people worked, getting that right just 55.6 percent of the time. But that, Polakis says, could simply mean that the location they identified as “work” is actually a school or a place where the person spends what would otherwise be working hours.

Finally, the researchers set about identifying sensitive locations a user might have visited. To do that, they compared the tweet locations to Foursquare’s directory of businesses and venues. They were looking for places like hospitals, urgent care centers, places of worship, and also strip clubs and gay bars. Any venue that appeared within 27 yards of the geotagged tweet would be considered as a potential location. Then, they conducted a similar keyword analysis, searching for words associated with health, religion, sex, and nightlife, to check whether a user was likely where they seemed to be. Using this method, the researchers found that LPAuditor was right about sensitive locations about 80 percent of the time.
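
A minimal sketch of that matching step, with a hypothetical venue standing in for the Foursquare directory (25 meters is roughly the 27 yards cited above):

    # Sensitive-venue sketch: flag tweets sent within ~25 m of a listed venue,
    # then sanity-check against keywords. The venue and keyword lists are
    # hypothetical stand-ins for the Foursquare data the researchers used.
    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in meters."""
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    VENUES = [
        {"name": "Example Rehab Center", "lat": 40.7490, "lon": -73.9860,
         "keywords": {"doctor", "clinic", "rehab"}},
    ]

    def flag_sensitive(tweet, radius_m=25):
        """Return names of sensitive venues this tweet was likely sent from."""
        hits = []
        text = tweet["text"].lower()
        for v in VENUES:
            near = haversine_m(tweet["lat"], tweet["lon"], v["lat"], v["lon"]) <= radius_m
            if near and any(k in text for k in v["keywords"]):
                hits.append(v["name"])
        return hits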

Of course, if a user is tweeting about, say, being at the doctor while they’re at the doctor, one might argue that they’re not so concerned about privacy. But Polakis says, “The location might give away more information than the user wants to say.” In one case, the researchers found a user who was tweeting about a doctor from a location that the GPS coordinates revealed to be a rehab facility. “That’s a lot more sensitive context than what they were willing to disclose,” he says.

Even when the tweet doesn’t include context clues, LPAuditor was still able to predict whether a person had actually spent time at a sensitive location by studying the duration of time that people spent there and the number of times they returned. The researchers were, however, unable to measure the accuracy of these specific predictions.

The majority of this research was based on tweets that were sent prior to Twitter's policy change in April 2015. That change, Polakis says, made a huge difference in terms of how much precise location data was available through the API. To measure just how huge, the researchers excluded all of the tweets they collected prior to April 2015 and found that they were only able to positively identify key locations for about one-fifteenth of the users they were studying. In other words, Polakis says, "That kind of invasive Twitter behavior increased the amount of people we could attack by 15 times."


The fact that Twitter changed its policies is a good thing. The problem is, so much of that pre-2015 location data is still available through the API. Asked why Twitter didn't scrub it after changing the policy, the Twitter site integrity employee said, "We didn’t feel it would be appropriate for us to go back and unilaterally make the decision to change people’s tweets without their consent."

This is not the first study to reveal what can be inferred from location data, or even geotagged tweets. But, according to Henry Kautz, a computer scientist at the University of Rochester who has conducted similar research, this paper makes key contributions. "The advancement here is that they studied two types of locations—work and home—rather than one, and they did a larger study with a more systematic evaluation and a more highly tuned algorithm, so it got the right answer a higher percentage of the time," Kautz says. LPAuditor isn't exclusive to Twitter data either. It could be applied to any set of location data.

Kautz argues that Twitter is of relatively small concern compared to other apps that continue to use invasive location data practices today. Government officials in Los Angeles recently filed a lawsuit against the IBM-owned Weather Channel app for allegedly collecting and selling users' geolocation data under the guise of helping users "personaliz[e] local weather data, alerts, and forecasts." And just this week, Motherboard reported that bounty hunters are using location data purchased from T-Mobile, Sprint, and AT&T to track individuals using their phones. That's despite the companies' public promises to stop selling such data. Then, of course, there are apps that get infected with malware and gobble up location data.

"The big problem today is not nefarious people looking at your geotagged tweets. The problem is compromised cell phone apps that steal your entire GPS history," Kautz says. "From that data one can extract not just your home and work locations, but a huge number of significant places in your life."

And yet, Polakis says the fact that Twitter no longer attaches GPS coordinates to all geotagged tweets isn't enough, given that developers still have access to years' worth of data from before 2015. Yes, some of that information might now be stale. People move. They change jobs. But even outdated information can be useful to an attacker, and other sensitive information, like, say, a person's sexuality, seems unlikely to change. This study proves that not only is it possible to infer this kind of information from location data, but that a machine can do it almost instantly.

For now, Polakis says, the most people can do is delete their location data today—and think twice before sharing it in the future.

Disclaimer: this post and the subject matter and contents thereof - text, media, or otherwise - do not necessarily reflect the views of the 8kun administration.

14f28b No.234912

File: 13b419bb524430c⋯.png (490.29 KB,1016x984,127:123,Screenshot_2025_07_13_at_0….png)

Disclaimer: this post and the subject matter and contents thereof - text, media, or otherwise - do not necessarily reflect the views of the 8kun administration.


