
Using fuzzy bits and neural networks to partially invert few rounds of some cryptographic hash functions (1901.02438v1)

Published 8 Jan 2019 in cs.CR, cs.LG, and cs.NE

Abstract: We consider fuzzy, or continuous, bits, which take values in [0;1] and (-1;1] instead of {0;1}, together with operations on them (NOT, XOR, etc.) and on their sequences (ADD), in order to generalize cryptographic hash functions (CHFs) to messages consisting of fuzzy bits, so that the CHFs become smooth, non-constant functions of each bit of the message. We then train neural networks to predict the message that has a given hash, where the loss function comparing the hash of the predicted message with the given true hash is backpropagatable. The results of training on the standard CHFs (MD5, SHA1, SHA2-256, and SHA3/Keccak) with a small number of (optionally weakened) rounds are presented and compared.
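
The core idea, relaxing Boolean bit operations into smooth functions on [0;1] so that a round-reduced hash becomes differentiable and a preimage can be sought by gradient-based training, can be illustrated with a small sketch. The definitions below (NOT as 1 - x, XOR as x + y - 2xy, a ripple-carry ADD built from them, and a mean-squared-error loss between fuzzy hash vectors) are common probabilistic relaxations assumed here for illustration; they are not taken from the paper, which may define its fuzzy operations differently.

```python
import numpy as np

# Fuzzy bits live in [0, 1]; at the endpoints the operations below reduce
# to ordinary Boolean logic, and in between they are smooth, so gradients
# can flow through a round-reduced "fuzzy" hash built from them.

def fuzzy_not(x):
    return 1.0 - x

def fuzzy_xor(x, y):
    # Agrees with Boolean XOR on {0, 1}; smooth in between.
    return x + y - 2.0 * x * y

def fuzzy_and(x, y):
    return x * y

def fuzzy_add(xs, ys):
    """Ripple-carry addition of two little-endian fuzzy-bit words.

    The carry a*b + c*(a XOR b) is the usual full-adder carry; for values
    in [0, 1] it stays in [0, 1] (it equals the probability that at least
    two of three independent bits are 1). The overflow carry is dropped.
    """
    carry = 0.0
    out = np.empty_like(xs)
    for i in range(len(xs)):
        x, y = xs[i], ys[i]
        out[i] = fuzzy_xor(fuzzy_xor(x, y), carry)
        carry = fuzzy_and(x, y) + fuzzy_and(carry, fuzzy_xor(x, y))
    return out

def hash_loss(predicted_hash_bits, target_hash_bits):
    # A simple mean-squared error between fuzzy hash outputs; any smooth
    # distance works, which is what makes the objective backpropagatable.
    return float(np.mean((predicted_hash_bits - target_hash_bits) ** 2))

# Sanity check on crisp bits: 3 + 5 = 8 (little-endian, 4 bits).
a = np.array([1.0, 1.0, 0.0, 0.0])   # 3
b = np.array([1.0, 0.0, 1.0, 0.0])   # 5
print(fuzzy_add(a, b))               # [0. 0. 0. 1.] -> 8
```

Composing such operations for a few rounds of MD5, SHA1, SHA2-256, or SHA3/Keccak yields a differentiable surrogate of the compression function, so a network's predicted message can be scored against the target hash with a smooth loss of the kind sketched above.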

Citations (6)


