Scott Craver
Binghamton University
Publication
Featured research published by Scott Craver.
IEEE Transactions on Information Forensics and Security | 2015
Enping Li; Scott Craver; Jun Yu
Supraliminal channels were introduced in 1998 as a means to achieve public key exchange in the presence of an active warden. These channels have content visible to all principals: their content is not concealed or protected by a secret key, yet they are highly robust to an active warden. It is assumed that this high robustness allows the transmission of key exchange datagrams. In this paper, we provide a theoretical model for supraliminal channels as channels of random data with a constraint on their distribution. We present a surprising result: in such a model, vanishingly small tampering can indefinitely derail key exchange. Unlike in traditional communication theory, the specific constraints of steganographic channels prevent the use of redundancy to achieve more reliable transmission, and can even make communication more fragile to an active adversary. This result requires a vigilant adversary, however, and we propose a protocol to increase the probability of successful key exchange in the event of a pause in the warden's tampering.
acm workshop on multimedia and security | 2009
Enping Li; Scott Craver
Supraliminal channels are robust public channels based on multimedia content, allowing innocuous key exchange in the presence of an active warden. One approach to supraliminal communication embeds bits in the pseudo-random state used to produce computer-generated special effects in hybrid media such as a videoconferencing session. In this paper we describe an application for wireless phones whose audio effects betray pseudo-random state and hence potential key information for an innocuous exchange protocol. Implementation of a walkie-talkie application for the Apple iPhone shows that hundreds of bits can be reliably transmitted per effect.
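As an illustrative sketch (not the paper's iPhone implementation), the idea of betraying pseudo-random state through an effect can be modeled in a few lines: the sender uses the key bits as the seed of the effect's generator, and the receiver, who observes the effect's pseudo-random parameters, recovers the seed. The `effect_params` function and the brute-force recovery are hypothetical simplifications; a real receiver would read the state directly from the observed effect.

```python
import random

def effect_params(seed, n=8):
    """Pseudo-random parameters an audio effect would expose (hypothetical)."""
    rng = random.Random(seed)
    return [rng.randrange(256) for _ in range(n)]

def embed(bits):
    """Sender: use the message bits directly as the effect's PRNG seed."""
    return effect_params(int(bits, 2))

def extract(observed, nbits):
    """Receiver: recover the seed by exhaustive search over 2**nbits candidates
    (a stand-in for reading the state directly from the effect's output)."""
    for seed in range(2 ** nbits):
        if effect_params(seed) == observed:
            return format(seed, f"0{nbits}b")
    return None

msg = "1011001110001111"          # 16 hypothetical key bits
assert extract(embed(msg), 16) == msg
```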
information hiding | 2008
Scott Craver; Enping Li; Jun Yu; Idris M. Atakli
Unlike subliminal or steganographic channels, a supraliminal channel encodes information in the semantic content of cover data, generating innocent communication in a manner similar to mimic functions. These low-bitrate channels are robust to active wardens, and can be used with subliminal channels to achieve steganographic public key exchange. Automated generation of innocent-looking content, however, remains a difficult problem. Apple's iChat, a popular instant-messaging client and the default client on the Macintosh operating system, includes a video chat facility that allows the user to apply special effects such as replacing the user's background with a video file. We show how this can be used to implement a high-bitrate supraliminal channel, by embedding a computer animation engineered to communicate ciphertext by its pseudo-random behavior.
Proceedings of SPIE | 2010
Scott Craver; Jun Yu
The square root law holds that the acceptable embedding rate is sublinear in the cover size, specifically O(√n), in order to prevent detection as the warden's data, and thus detector power, increases. One way to transcend this law, at least in the i.i.d. case, is to restrict the cover to a chosen subset whose distribution is close to that of altered data. Embedding is then performed on this subset; this replaces the problem of finding a small enough subset to evade detection with the problem of finding a large enough subset that possesses a desired type distribution. We show that one can find such a subset of size asymptotically proportional to n rather than √n. This works in the case of both replacement and tampering: even if the distribution of tampered data depends on the distribution of cover data, one can find a fixed point in the probability simplex such that cover data of that distribution yields stego data of the same distribution. While transmitting the subset itself is not allowed, this is no impediment: wet paper codes can be used, or, in the worst case, a maximal desirable subset can be computed from the cover by both sender and receiver without communication of side information.
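The fixed-point idea can be sketched numerically. Assuming a toy tampering model in which embedding replaces a symbol with a uniformly random one with some probability (the paper's actual model is more general), iterating the cover-to-stego map converges to a distribution that embedding leaves unchanged:

```python
import numpy as np

def stego_dist(p, flip=0.1):
    """Hypothetical tampering model: embedding replaces a symbol with a
    uniform one with probability `flip`, so the stego distribution is a
    mixture of the cover distribution and the uniform distribution."""
    k = len(p)
    return (1 - flip) * p + flip * np.full(k, 1.0 / k)

def fixed_point(T, k=4, iters=200):
    """Iterate p <- T(p) from a random start to find a distribution in the
    probability simplex that the embedding map leaves unchanged."""
    p = np.random.default_rng(0).dirichlet(np.ones(k))
    for _ in range(iters):
        p = T(p)
    return p

p = fixed_point(stego_dist)
assert np.allclose(p, stego_dist(p))   # cover and stego distributions agree
```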
Proceedings of SPIE | 2014
Alireza Farrokh Baroughi; Scott Craver
Speaker recognition is used to identify a speaker's voice from among a group of known speakers. A common method of speaker recognition is classification based on cepstral coefficients of the speaker's voice, using a Gaussian mixture model (GMM) to model each speaker. In this paper we try to fool a speaker recognition system using additive noise such that an intruder is recognized as a target user. Our attack uses a mixture component selected from a target user's GMM, inverting the cepstral transformation to produce noise samples. In our five-speaker database, we achieve an attack success rate of 50% with a noise signal at 10 dB SNR, and 95% by increasing noise power to 0 dB SNR. The importance of this attack is its simplicity and flexibility: it can be employed in real time with no processing of the attacker's voice, and little computation is needed at the moment of detection, allowing the attack to be performed by a small portable device. For any target user, knowing that user's model or a voice sample is sufficient to compute the attack signal, and the intruder need only play it while speaking to be classified as the victim.
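A minimal sketch of the inversion step, using an orthonormal DCT in place of a full MFCC front end (mel filterbanks, framing, and the recognizer itself are omitted): the mean of a chosen GMM component is inverse-transformed to a log-spectrum, and noise frames are synthesized at those spectral magnitudes with random phase. The function names and the pipeline simplification are assumptions, not the paper's implementation.

```python
import numpy as np

def idct_matrix(n):
    """Orthonormal DCT-II basis matrix; its transpose inverts the transform."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= 1 / np.sqrt(2)
    return M * np.sqrt(2.0 / n)

def attack_noise(cepstral_mean, frames=100):
    """Turn the mean of one GMM component back into noise: inverse-DCT to a
    log-spectrum, exponentiate to get spectral magnitudes, then synthesize
    noise frames at those magnitudes with random phase."""
    n = len(cepstral_mean)
    log_spec = idct_matrix(n).T @ cepstral_mean   # invert the cepstral DCT
    mag = np.exp(log_spec)
    rng = np.random.default_rng(0)
    phases = rng.uniform(0, 2 * np.pi, (frames, n))
    return np.fft.irfft(mag * np.exp(1j * phases), axis=1)
```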
acm workshop on multimedia and security | 2011
Enping Li; Scott Craver
The Square Root Law of Ker, Filler and Fridrich establishes asymptotic capacity limits for steganographic communication, caused by the watchful eye of a passive warden. We exhibit a separate fundamental limit of steganographic communication caused by a second phenomenon, the noise inflicted by an active warden. When a steganographic channel is not protected by a secret key, for example when it is used for key exchange, the number of errors needed to derail the channel grows no faster than the square root of the cover length. This means that contrary to intuition, embedding a message across a larger cover makes transmission less robust. This result is so pessimistic that it applies even to the transmission of a single datagram, a message of constant length, within a cover stream of arbitrary size. It is also true if the warden is forced by channel constraints to inflict noise randomly instead of surgically. While this law does not apply when the sender and receiver share a key in advance, ultimately this result implies that an active warden can indefinitely postpone the initial handshake of steganographic communication with a vanishingly small error rate. It also causes us to question whether the notion of a supraliminal channel is physically realizable, as even very highly robust communications channels become increasingly vulnerable for larger covers.
Proceedings of SPIE | 2011
Jun Yu; Scott Craver; Enping Li
While previous work on lens identification by chromatic aberration succeeded in distinguishing lenses of different models, the CA patterns obtained were not stable enough to distinguish different copies of the same lens. This paper discusses how to eliminate two major hurdles in the way of obtaining a stable lens CA pattern. The first hurdle was overcome by using a white-noise pattern as the imaging target, supplanting the conventional but misalignment-prone checkerboard pattern. The second was removed by accounting for the lens focal distance, a factor which had not received the attention it deserves. Consequently, we were able to obtain a CA pattern stable enough to distinguish different copies of the same lens. Finally, with a complete view of the lens CA pattern feature space, it becomes possible to perform lens identification against a large lens database.
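Aligning color channels on a white-noise target can be sketched with phase correlation; this toy version recovers only integer-pixel cyclic displacements, whereas real lateral CA measurement requires sub-pixel, spatially varying estimates:

```python
import numpy as np

def channel_shift(ref, moved):
    """Estimate the (dy, dx) displacement of `moved` relative to `ref` by
    phase correlation -- integer-pixel precision only in this sketch."""
    F = np.conj(np.fft.fft2(ref)) * np.fft.fft2(moved)
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    # Unwrap cyclic indices to signed displacements.
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

# White-noise target: shift a "red" channel by a known lateral CA offset.
rng = np.random.default_rng(1)
green = rng.random((64, 64))
red = np.roll(green, (2, -3), axis=(0, 1))
assert channel_shift(green, red) == (2, -3)
```

The white-noise target makes the correlation peak sharp and unambiguous, which is exactly the misalignment-robustness the paper attributes to it over a checkerboard.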
Eurasip Journal on Information Security | 2007
Scott Craver; Idris M. Atakli; Jun Yu
The Break Our Watermarking System (BOWS) contest gave researchers three months to defeat an unknown watermark, given three marked images and online access to a watermark detector. The authors participated in the first phase of the contest, defeating the mark while retaining the highest average quality among attacked images. The techniques developed in this contest led to general methods for reverse-engineering a watermark algorithm via experimental images fed to its detector. The techniques exploit the tendency of watermark algorithms to admit characteristic false positives, which can be used to identify an algorithm or estimate certain parameters.
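The probing idea resembles a sensitivity attack. As a toy illustration (not the authors' actual BOWS method), suppose a black-box detector answers yes/no based on correlation with a secret carrier; single-pixel probes then leak the carrier's sign pattern at one coefficient per query:

```python
import numpy as np

# Hypothetical black-box detector: reports only "mark present" when the
# probe image correlates strongly with a secret carrier w.
rng = np.random.default_rng(7)
w = rng.choice([-1.0, 1.0], size=64)
detect = lambda img: float(img @ w) > 32.0

def probe_carrier(detect, n, amp=64.0):
    """Recover the sign of each carrier coefficient from yes/no answers:
    a single-pixel spike of amplitude `amp` triggers the detector exactly
    when its sign matches that coefficient's sign."""
    est = np.zeros(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = amp
        est[i] = 1.0 if detect(e) else -1.0
    return est

assert np.array_equal(probe_carrier(detect, 64), w)   # carrier recovered
```

Each probe is an experimental image whose false-positive behavior is characteristic of the algorithm's parameters, mirroring the reverse-engineering strategy described above.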
acm workshop on multimedia and security | 2009
Idris M. Atakli; Yu Chen; Qing Wu; Scott Craver
BLINK, or brief-lifetime ink, is a technology for secure document delivery and management that employs pixel-domain scrambling of raster images by a hardware device connecting a computer to its display. A BLINK decoder monitors digital video signals, identifies the presence of scrambled images, and selectively decrypts regions of the video frame containing them. The primary application is delivery of confidential documents that can only be viewed by a specific machine or user, possibly within a fixed time limit or for a fixed number of views. Moving decryption from the computer to the display cable confers numerous advantages, including complete protection against document forwarding, copying, pasting, screen capture, and memory snooping; furthermore, it requires no particular operating system, reader software, or proprietary document format. In this paper we implement several core BLINK primitives on an Altera FPGA development board. These primitives include encrypted image identification and location, key extraction, decryption by a key stream, and bit-plane extraction for encrypted images embedded in LSB planes of other images.
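The scramble/descramble primitive can be sketched as a keystream XOR over a rectangular region, which is an involution: the decoder applies the same operation to restore the plaintext pixels. The seeded NumPy generator here stands in for a proper stream cipher (e.g. AES in counter mode), and the region and key-handling details are simplifications of the hardware design:

```python
import numpy as np

def keystream(key, n):
    """Deterministic byte stream both sides derive from a shared key
    (a real design would use a cryptographic stream cipher, not a PRNG)."""
    rng = np.random.default_rng(key)
    return rng.integers(0, 256, size=n, dtype=np.uint8)

def scramble(frame, region, key):
    """XOR the pixels of `region` = (y, x, h, w) with the keystream in place;
    applying it twice with the same key restores the original frame."""
    y, x, h, w = region
    patch = frame[y:y + h, x:x + w]
    patch ^= keystream(key, patch.size).reshape(patch.shape)
    return frame

rng = np.random.default_rng(3)
doc = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
original = doc.copy()
scramble(doc, (8, 8, 16, 16), key=12345)     # scramble before delivery
assert not np.array_equal(doc, original)
scramble(doc, (8, 8, 16, 16), key=12345)     # decoder descrambles in-line
assert np.array_equal(doc, original)
```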
electronic imaging | 2015
Alireza Farrokh Baroughi; Scott Craver
Biometric detectors for speaker identification commonly employ a statistical model of a subject's voice, such as a Gaussian mixture model (GMM), that combines multiple means to improve detector performance. This allows a malicious insider to amend or append a component of a subject's statistical model so that the detector behaves normally except under a carefully engineered circumstance, forcing a misclassification of his or her voice only when desired by smuggling data into a database far in advance of an attack. The attack is possible whenever the attacker has access to the database, even for a limited time, to modify the victim's model. We exhibit such an attack on a speaker identification system, in which the attacker forces a misclassification by speaking in an unusual voice, having replaced the least-weighted component of the victim's model with the most-weighted component of a model of the attacker's unusual voice. The attacker uses an unusual voice because his or her normal voice model may already be in the database; by attacking with an unusual voice, the attacker retains the option of being recognized as himself or herself when talking normally, or as the victim when talking in the unusual manner. By attaching an appropriately weighted vector to a victim's model, we can impersonate all users in our simulations, while avoiding unwanted false rejections.
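The poisoning step can be sketched under simplified assumptions (covariances omitted, and the dictionary field names are hypothetical, not the paper's data format): the victim's least-weighted mixture component is overwritten with the attacker's dominant "unusual voice" component, and the weights are renormalized so the model still looks well-formed:

```python
import numpy as np

def poison_model(victim, attacker):
    """Replace the victim's least-weighted mixture component with the
    attacker's most-weighted component, renormalizing the weight vector."""
    weights, means = victim["weights"].copy(), victim["means"].copy()
    i = np.argmin(weights)                # victim slot to overwrite
    j = np.argmax(attacker["weights"])    # attacker's dominant component
    means[i] = attacker["means"][j]
    weights[i] = attacker["weights"][j]
    weights /= weights.sum()              # keep weights summing to 1
    return {"weights": weights, "means": means}

victim = {"weights": np.array([0.5, 0.45, 0.05]),
          "means": np.zeros((3, 4))}
attacker = {"weights": np.array([0.7, 0.3]),
            "means": np.ones((2, 4))}
poisoned = poison_model(victim, attacker)
assert np.allclose(poisoned["means"][2], attacker["means"][0])
assert np.isclose(poisoned["weights"].sum(), 1.0)
```

Because only the lowest-weight component changes, the poisoned model scores almost identically on normal speech, which is why the tampering evades notice until the attacker speaks in the engineered voice.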