University of Newcastle upon Tyne
Information-flow Security for Interactive Programs

Speaker: Kevin O'Neill
20th March 2006, 3pm, 701 Claremont Tower

Abstract

Information flow restrictions are intended to ensure that computer systems do not allow confidential information to leak to public users. Type systems that enforce information-flow security in imperative programs are well known (Denning, 1976; Volpano, Smith, and Irvine, 1996), but they are overly conservative in the sense that they rule out programs that, intuitively, never leak confidential information. It is therefore worthwhile to define semantic security conditions that capture standard intuitions of what information flow means. A variety of semantic security conditions for imperative programs have been proposed, but these conditions typically restrict information flows from confidential program variables to public program variables. Such conditions are more appropriate for reasoning about "batch-job" programs (which accept inputs, perform some computation, and produce outputs) than about interactive programs that interact with real, unknown users at runtime. Furthermore, existing semantic conditions based on noninteractive models permit insecure information flows in imperative interactive programs.

In this work we formulate new information-flow conditions for a simple imperative programming language. Programs written in the language interact with users at multiple security levels via simple input and output operators. We encapsulate user behavior as "strategies" that determine which input a user provides based on the history of input and output events she has seen, and we use traces of input and output events to model the observations made by users at different security levels. The language includes operators for nondeterministic and probabilistic choice, and we give definitions of noninterference which account for the nondeterministic behavior that programs may exhibit.
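As a rough illustration of the model described above (a sketch for intuition only, not the talk's formal semantics), the following Python fragment represents user strategies as functions from a user's observed history to her next input, and lets a low observer see only the low-level events of a trace. The `run` harness, the `leaky` program, and all names here are invented for this example:

```python
# Security levels; a low user observes only low events.
HIGH, LOW = "high", "low"

def visible(trace, level):
    """The sub-trace of events a user at `level` can observe."""
    if level == HIGH:
        return list(trace)
    return [e for e in trace if e[1] == LOW]

def run(program, strategies):
    """Run an interactive program against per-level user strategies.

    `program` is a generator that yields ('in', level) to request input
    from the user at `level`, and ('out', level, value) to emit output.
    Each input is chosen by that user's strategy, applied to the history
    of events visible to her so far.
    """
    trace = []
    step = program()
    event = next(step)
    while True:
        try:
            if event[0] == "in":
                level = event[1]
                value = strategies[level](visible(trace, level))
                trace.append(("in", level, value))
                event = step.send(value)
            else:  # ('out', level, value)
                trace.append(event)
                event = next(step)
        except StopIteration:
            return trace

# An interactive program that is clearly insecure: it copies an input
# from the high user directly to the low output channel.
def leaky():
    secret = yield ("in", HIGH)   # read from the high user
    yield ("out", LOW, secret)    # write it where the low user can see it

# The high user's strategy always supplies 42; the low user never inputs.
trace = run(leaky, {HIGH: lambda hist: 42, LOW: lambda hist: 0})
print(visible(trace, LOW))  # → [('out', 'low', 42)] — the secret leaks
```

A noninterference condition in this style would require, roughly, that the low-visible portion of the trace be independent of the high user's strategy — a requirement `leaky` visibly violates, since changing the high strategy changes what the low observer sees.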
We illustrate the conditions with simple examples of secure and insecure programs. Finally, we describe a simple type system for the language, based on previous type systems that prevent insecure information flow in imperative programs. The type system, though conservative, soundly enforces our definitions of noninterference. (This is joint work with Michael Clarkson and Stephen Chong.)

Kevin O'Neill is a Ph.D. candidate in the computer science department of Cornell University, where his research focuses on semantic definitions of secrecy and anonymity. He did his undergraduate work in computer science at the University of British Columbia in Vancouver. In his spare time he likes to sail and to play the piano, but he could use more training in both pursuits. He is currently looking for a job.