Peter G. Neumann
SRI International
Publications
Featured research published by Peter G. Neumann.
Fall Joint Computer Conference | 1965
Robert C. Daley; Peter G. Neumann
The need for a versatile on-line secondary storage complex in a multiprogramming environment is immense. During on-line interaction, user-owned off-line detachable storage media such as cards and tape become highly undesirable. On the other hand, if all users are to be able to retain as much information as they wish in machine-accessible secondary storage, various needs become crucial: Little-used information must percolate to devices with longer access times, to allow ample space on faster devices for more frequently used files. Furthermore, information must be easy to access when required, it must be safe from accidents and maliciousness, and it should be accessible to other users on an easily controllable basis when desired. Finally, any consideration which is not basic to a user's ability to manipulate this information should be invisible to him unless he specifies otherwise.
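The percolation scheme described in this abstract can be sketched roughly as follows, assuming a simple least-recently-used demotion policy (the abstract does not prescribe one); the class and device names are hypothetical, not from the actual Multics design.

```python
# Sketch: files that have not been used recently migrate ("percolate") from a
# fast device to a slower one, freeing space for frequently used files.

class StorageHierarchy:
    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity
        self.fast = {}   # file name -> logical time of last access
        self.slow = {}
        self.clock = 0

    def access(self, name):
        self.clock += 1
        if name in self.slow:        # promote a file back to fast storage on use
            del self.slow[name]
        self.fast[name] = self.clock
        self._percolate()

    def _percolate(self):
        # demote least-recently-used files once the fast device is full
        while len(self.fast) > self.fast_capacity:
            lru = min(self.fast, key=self.fast.get)
            self.slow[lru] = self.fast.pop(lru)

h = StorageHierarchy(fast_capacity=2)
for name in ("a", "b", "c"):
    h.access(name)
print(sorted(h.slow))   # "a" was least recently used, so it percolated
```

The point of the sketch is that migration is invisible to the user: `access` works the same way regardless of which device currently holds the file.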
IEEE Symposium on Security and Privacy | 1986
Dorothy E. Denning; Selim G. Akl; Matthew Morgenstern; Peter G. Neumann; Roger R. Schell; Mark R. Heckman
Because views on relational database systems mathematically define arbitrary sets of stored and derived data, they have been proposed as a way of handling context- and content-dependent classification, dynamic classification, inference, aggregation, and sanitization in multilevel database systems. This paper describes basic view concepts for a multilevel-secure relational database model that addresses the above issues. The model treats stored and derived data uniformly within the database schema. All data in the database is classified according to views called classification constraints, which specify security levels for related data. In addition, views called aggregation constraints specify classifications for aggregates that are classified higher than their constituent elements. All data accesses are confined to a third set of views called access views, which filter out all data classified higher than their declared view level.
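As a rough illustration of the access-view idea (not the paper's formal model), a view declared at a given level can be pictured as a filter that discards rows classified above that level. The level lattice and row data below are invented for the example.

```python
# Hypothetical linear ordering of security levels (real models use a lattice).
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2}

rows = [
    {"ship": "A", "cargo": "grain",    "level": "unclassified"},
    {"ship": "B", "cargo": "missiles", "level": "secret"},
]

def access_view(rows, view_level):
    """Return only rows whose classification does not exceed view_level."""
    limit = LEVELS[view_level]
    return [r for r in rows if LEVELS[r["level"]] <= limit]

print(access_view(rows, "confidential"))  # only ship A is visible
```

A classification constraint would, in the same spirit, be a rule assigning the `level` field; the key property is that every access is mediated by such a view, so over-classified data never reaches the user.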
ACM Sigsoft Software Engineering Notes | 1992
Peter G. Neumann
Copyright 2004, Peter G. Neumann, SRI International EL243, Menlo Park CA 94025-3493 (e-mail [email protected]; http://www.CSL.sri.com/neumann; telephone 1-650-859-2375; fax 1-650-859-2844): Editor, ACM SIGSOFT Software Engineering Notes, 1976–93, Assoc.Ed., 1994–; Chairman, ACM Committee on Computers and Public Policy (CCPP); Moderator of the Risks Forum (comp.risks); cofounder with Lauren Weinstein of People For Internet Responsibility (http://www.pfir.org).
IEEE Transactions on Software Engineering | 1986
Peter G. Neumann
Considers the design of computer systems that must be trusted to satisfy simultaneously a variety of critical requirements such as human safety, fault tolerance, high availability, security, privacy, integrity, and timely responsiveness, and that must continue to do so throughout maintenance and long-term evolution. Hierarchical abstraction is shown to provide the basis for successive layers of trust with respect to the full set of critical requirements, explicitly reflecting differing degrees of criticality.
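The layered-trust idea can be sketched as follows; the layer names and property sets are hypothetical, chosen only to illustrate how trust in critical requirements accumulates upward through hierarchical abstraction.

```python
# Each layer may rely only on layers below it, so the set of properties a
# layer can assume is the union of what is established at or beneath it.
layers = [
    ("hardware",     {"fault tolerance"}),
    ("kernel",       {"security", "integrity"}),
    ("services",     {"availability"}),
    ("applications", {"privacy", "timeliness"}),
]

def properties_trusted_at(index):
    """Properties a layer may assume: those established at or below it."""
    trusted = set()
    for _, props in layers[: index + 1]:
        trusted |= props
    return trusted

print(sorted(properties_trusted_at(1)))  # what the kernel layer may assume
```

The ordering reflects differing degrees of criticality: a flaw low in the hierarchy undermines every property the layers above it depend on.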
ACM Sigsoft Software Engineering Notes | 2006
Peter G. Neumann
Edited by Peter G. Neumann (Risks Forum Moderator and Chairman of the ACM Committee on Computers and Public Policy), plus personal contributions by others, as indicated. Opinions expressed are individual rather than organizational, and all of the usual disclaimers apply. We address problems relating to software, hardware, people, and other circumstances relating to computer systems. To economize on space, we include pointers to items in the online Risks Forum: (R i j) denotes RISKS vol i number j. Cited RISKS items generally identify contributors and sources, together with URLs. Official RISKS archives are available at www.risks.org (which redirects to Newcastle and gets you nice html formatting and a search engine courtesy of Lindsay Marshall: http://catless.ncl.ac.uk/Risks/i.j.html gets you (R i j)) and at ftp://www.sri.com/risks.
Communications of The ACM | 1998
Peter G. Neumann
Internet gambling has increased steadily, with many new online casinos operating from countries having little or no gambling regulation. Attempts to ban or regulate Internet gambling are likely to inspire more foreign establishments, including sites outside of territorial waters. Revenues from Internet gambling operations are estimated to reach $8 billion by the year 2000, whereas the current total take for all U.S. casinos is $23 billion. We consider here primarily specific risks associated with Internet gambling. Generic risks have been raised in earlier columns, such as Webware security (April 1997), cryptography (August 1997), anonymity (December 1996), and poor authentication (April 1994). For example, how would you ensure actual connection to the Internet casino of your choice? Gambling suffers from well-known risks including disadvantageous odds, uncertainty of payback, skimming by casinos, personal addiction, and ruin. Internet gambling brings further problems, including lack of positive identification and authentication of the parties, the remote use of credit cards, and out-of-jurisdiction casinos. Even if you were assured connection to your desired online casino (for example, using some form of strong cryptographic authentication), how would you know the organization is reputable? If you're unsure, an extra gamble is taken, and technology cannot help you. Payoffs could be rigged. There could also be fraudulent collateral activities such as capture and misuse of credit-card numbers, blackmail, money laundering, and masqueraders using other people's identities, either winning or racking up huge losses at relatively little risk to themselves. Serious addicts who might otherwise be observed could remain undetected much longer. (On the Internet no one knows you are a gambler, except maybe the casino, unless you gamble under many aliases.) Anonymity of gamblers is a particularly thorny issue. Tax-collecting agencies that strongly oppose anonymous gambling might lobby to require recoverable cryptographic keys. Legislation before the U.S. Congress would prohibit Internet gambling by bringing it under the Interstate Wire Act, with stiff fines and prison terms for both operators and gamblers. It would also allow law enforcement to pull the plug on illegal Internet sites.
It is not clear whether such legislation would hinder off-shore operations, where casinos would be beyond legal reach and gamblers might use encryption to mask their activities. Legalization is an alternative; the Australian state of Victoria has decided to strictly regulate and tax online gambling, hoping to drive out illegal operations. Although Internet gambling can be outlawed, it cannot be stopped. There are too many ways around prohibition, including hopping through a multitude of neutral access sites (for example, service providers), continually changing Internet addresses on the part of the casinos, anonymous remailers and traffic redirectors, encryption and steganography, and so on. Online gambling has potential legal side effects, generating pressure to outlaw good security. However, legally restricting good system security practices and strong cryptography would interfere with efforts to better protect our national infrastructures and with the routine conduct of legitimate Internet commerce. Thus, Internet gambling represents the tip of a giant iceberg. What happens here can have major impacts on the rest of our lives, even for those of us who do not gamble. One possibility not included in current legislation would be to make electronic gambling winnings and debts legally uncollectible. That would make it more difficult for online casinos to collect legally from customers. However, with increasingly sophisticated Internet tracking services, it might also inspire some new forms of innovative, unorthodox, life-threatening illegal collection methods on behalf of the e-casinos. It would also exacerbate the existing problem that gamblers are required to report illegal losses if they wish to offset their winnings (legal or otherwise), and would also bring into question the authenticity of computerized receipts of losses. Attempts to ban any human activity will never be 100% effective, no matter how self-destructive that behavior may be judged.
In some cases, the imposition of poorly considered technological fixes for sociological problems has the potential of doing more harm than good. For example, requiring ISPs to block clandestine illegal subscriber activities is problematic. Besides, the Internet is international. Seemingly easy local answers such as outlawing or regulating Internet gambling are themselves full of risks. The Internet can be addictive, but being hooked into it is different from being hooked on it. Whereas you are already gambling with the weaknesses in our computer-communication infrastructures, Internet gambling could raise the ante considerably. Caveat aleator. (Let the gambler beware!)
Communications of The ACM | 2003
Rebecca T. Mercuri; Peter G. Neumann
convenient, but they are also dangerous. Modes of attack include the following:
• Exhaustive attacks. Attempting password attacks by random trial and error was at one time occasionally successful, but is now unlikely to succeed because of limits on the number of incorrect tries and auditing of failed attempts.
• Educated guessing of passwords. At least until recently, users frequently chose as passwords dictionary words, proper names, or other character strings having a logical association with the individual (such as initials, a spouse's name, a dog's name, or a social security number). System passwords sometimes remain unchanged from the original defaults. Maintenance passwords are often common across different systems. Such passwords have been guessed surprisingly often.
• Deriving passwords. If words or names are chosen as passwords, then, even if the passwords are stored in an encrypted form, they can be discovered by pre-encryptive dictionary attacks [2], assuming the encrypted password file can be read. Such attacks are particularly insidious, because they can be carried out surreptitiously on systems other than the ones being attacked. Algorithmically generated passwords may also be prone to attack. For example, knowing one password in a pseudorandom sequence can be used to determine the subsequent ones.
• Capturing unencrypted passwords. Passwords exist in an unencrypted form as they are being typed, stored in memory, or in transit across a local or global network. They can be captured by exploiting system security flaws (or features) and by network snooping. Trojan horsing with network software (such as ftp and telnet) on various host systems has recently enabled the capture of passwords for accounts on other systems, and the newly captured passwords have then been used to implant Trojan horses on those other systems. Someone's ability to capture passwords that you may use to access other systems from your own host system may in turn compromise users on those other systems.
• Creating bogus passwords and trapdoors. Trojan horses may subvert user authentication, for example, by inserting a trapdoor into a security-critical program. The classical example is the C-compiler Trojan horse described by Ken Thompson [3] that could implant a trapdoor in the login routine. Also, if a password file is not properly protected against writing, a clever perpetrator may be able to edit the password file and insert a bogus but viable self-chosen user identifier and password (encrypted as needed), or to install a variant password file. …
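The pre-encryptive dictionary attack named in the list above can be sketched as follows. This is a minimal illustration, not the cited technique's actual tooling: it uses unsalted SHA-256 for brevity where password files of the era used crypt(3), and the account name and wordlist are invented.

```python
import hashlib

def dictionary_attack(hashed_passwords, wordlist):
    """Hash candidate words and compare against a stolen password file."""
    cracked = {}
    for word in wordlist:
        digest = hashlib.sha256(word.encode()).hexdigest()
        for user, stored in hashed_passwords.items():
            if stored == digest:
                cracked[user] = word
    return cracked

# A stolen "encrypted" password file: alice used her dog's name.
shadow = {"alice": hashlib.sha256(b"rover").hexdigest()}

print(dictionary_attack(shadow, ["password", "rover", "letmein"]))
# alice's password falls because it is a guessable dictionary word
```

Note why the attack is "surreptitious": every step runs offline on the attacker's own machine, so the per-try limits and audit trails that defeat exhaustive online attacks never fire. Salting and deliberately slow hash functions are the standard countermeasures.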
Journal of Cybersecurity | 2015
Hal Abelson; Ross J. Anderson; Steven Michael Bellovin; Josh Benaloh; Matt Blaze; Whitfield Diffie; John Gilmore; Matthew Green; Susan Landau; Peter G. Neumann; Ronald L. Rivest; Jeffrey I. Schiller; Bruce Schneier; Michael A. Specter; Daniel J. Weitzner
This paper considers the problem of attaining computer systems and applications programs that are both highly secure and highly reliable. It contrasts two current alternative approaches, one remedial, the other preventive. A remedial approach is outlined based on a classification of software security violations suggested by Bisbey, Carlstedt, and Hollingworth at ISI. This remedial analysis is then related to a preventive approach, illustrated here by the formal SRI Hierarchical Development Methodology. Evaluation of system security is then considered by combining concepts from the preventive and remedial approaches. This combination of techniques seems to have significant potential in the attainment and evaluation of computer system security. Illustrations are given for three types of systems, the first two being systems explicitly designed with security in mind, and the first of those being designed according to a formal methodology. The first system is the SRI design for a Provably Secure Operating System (PSOS), the second is Multics, and the third is UNIX. (The reader familiar with security may wish to skim the next two sections.)