
As requested by estradata here: https://old.reddit.com/Bitcoin/comments/iylou9/what_are_some_of_the_latest_innovations_in_the/g6heez1/

It is a general issue that crops up at the extremes of cryptography, with quantum breaks being just one of the extremes of (classical) cryptography.

## Computational vs Information-Theoretic

The dichotomy is between *computationally infeasible* and *information-theoretically infeasible*. Basically:

- Something is *computationally infeasible* if it could in theory be done, but you would not be able to build a practical computer to do it within the age of the universe, using only the power available in just one galaxy or thereabouts.
- Something is *information-theoretically infeasible* if you cannot do it even with an arbitrarily large amount of time, space, and energy.

For example, suppose you want to know which 256-bit preimages map to which 256-bit hashes. In theory, you *just* need to build a table with 2^{256} entries, starting from 0x0000000000000000000000000000000000000000000000000000000000000000 and counting up. This is computationally infeasible, but not information-theoretically infeasible.

However, suppose you want to know what preimages, of *any* size, map to 256-bit hashes. Since the preimages can be of any size, after finishing with 256-bit preimages you have to proceed to 257-bit preimages, and so on. There is no size limit, so you will literally *never* finish. Even if you lived forever, you would not complete it. This is information-theoretically infeasible.

## Commitments

How does this relate to confidential transactions? Basically, every confidential transaction simply hides the value behind a homomorphic commitment. What is a homomorphic commitment? Okay, let's start with commitments. A commitment is something which lets you hide something, and later reveal what you hid. Until you reveal it, even if somebody has access to the commitment, they cannot reverse it to find out what you hid. This is called the "hiding property" of commitments. However, when you **do** reveal it (or "open the commitment"), then you cannot replace what you hid with some other thing. This is called the "binding property" of commitments.

For example, a hash of a preimage is a commitment. Suppose I want to commit to something. For example, I want to show that I can predict the future using the energy of a spare galaxy I have in my pocket. I can hide that something by hashing a description of the future. Then I can give the hash to you. You still cannot learn the future, because it's just a hash, and you can't reverse the hash ("hiding"). But suppose the future event occurs. I can reveal that I did, in fact, know the future. So I give you the description, and you hash it and compare it to the hash I gave earlier. Because of preimage resistance, I cannot retroactively change what I hid in the hash, so what I gave must have been known to me at the time that I gave you the commitment, i.e. the hash ("binding").
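This commit-and-reveal flow can be sketched with an ordinary hash function. The prediction text and function name here are just illustrative stand-ins:

```python
import hashlib

# Committer: hide a prediction behind its hash.
prediction = b"The spare galaxy in my pocket says the event happens on Tuesday."
commitment = hashlib.sha256(prediction).hexdigest()
# Publish `commitment` now; nobody can invert the hash to read the prediction.

# Later, reveal the preimage ("open the commitment").
def open_commitment(commitment, revealed):
    # Binding: preimage resistance stops me swapping in a different prediction.
    return hashlib.sha256(revealed).hexdigest() == commitment

print(open_commitment(commitment, prediction))  # True: the reveal checks out
```

Note that this only hides well when the hidden message is hard to guess; a guessable message needs a random salt, which the essay returns to below.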

## Homomorphic Commitments

A *homomorphic* commitment scheme simply means that if I can do certain operations on preimages of the commitment scheme, there are corresponding operations on the commitments that produce similar ("homo") changes of form ("morphic"). For example, suppose I have a magical function h() which is a homomorphic commitment scheme. It can hide very large (near-256-bit) numbers. Then if h() is homomorphic, certain operations on numbers behind the h() have homomorphisms after the h(). For example, I might have an operation <+> that is homomorphic in h() on +; in other words, if I have two large numbers a and b, then h(a + b) = h(a) <+> h(b). + and <+> are different operations, but they are homomorphic to each other.

For example, elliptic curve scalars and points have homomorphic operations. Scalars (private keys) are "just" very large near-256-bit numbers, while points are a scalar times a standard generator point G. Elliptic curve operations exist where there is a <+> between points that is homomorphic on standard + on scalars, and a <*> between a scalar and a point that is homomorphic on standard * multiplication on scalars.

For example, suppose I have a large scalar a. I can use elliptic curve points as a commitment scheme: I take a <*> G to generate a point A. This is hiding, since nobody can learn what a is unless I reveal it (a and A can be used in standard ECDSA private-public key cryptography, with the scalar a as the private key and the point A as the public key, and a cannot be derived even if somebody else knows A). At the same time, for a particular point A and standard generator point G, there is only one scalar a which, when "multiplied" with G, yields A. So scalars and elliptic curve points form a commitment scheme with both hiding and binding properties.

Now, as mentioned, there is a <+> operation on points that is homomorphic to the + operation on corresponding scalars. For example, suppose there are two scalars a and b. I can compute (a + b) <*> G to generate a particular point. But even if I don't know scalars a and b, but I do know *points* A = a <*> G and B = b <*> G, then I can use A <+> B to derive (a + b) <*> G (or equivalently, (a <*> G) <+> (b <*> G) == (a + b) <*> G). This makes points a homomorphic commitment scheme on scalars.
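These homomorphisms can be checked directly with a minimal, textbook implementation of secp256k1 affine arithmetic. This is a non-constant-time sketch for illustration only, never for handling real keys:

```python
# Toy secp256k1 arithmetic: point addition <+> and scalar multiplication <*>.
p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # group order
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(P, Q):  # the <+> operation; None is the point at infinity
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0: return None
    if P == Q:
        lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p      # tangent slope
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p     # chord slope
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def ec_mul(k, P):  # the <*> operation, by double-and-add
    R = None
    while k:
        if k & 1: R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

a, b = 12345, 67890  # two scalars (tiny here; real ones are near-256-bit)
# (a + b) <*> G == (a <*> G) <+> (b <*> G): point addition mirrors scalar addition.
assert ec_mul(a + b, G) == ec_add(ec_mul(a, G), ec_mul(b, G))
# <*> is likewise homomorphic on scalar multiplication (modulo the group order n).
assert ec_mul(a * b % n, G) == ec_mul(a, ec_mul(b, G))
print("homomorphisms hold")
```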

## Confidential Transactions: A Sketch

This is useful: we can use the near-256-bit scalars of the SECP256K1 elliptic curve to represent values in a monetary system, and hide those values with a homomorphic commitment scheme. The hiding property prevents people from learning the values of the money we are sending and receiving.

Now, in a proper cryptocurrency, a normal, non-coinbase transaction does not create or destroy coins: the values of the input coins equal the values of the output coins. This is where a *homomorphic* commitment scheme earns its keep. Suppose I have a transaction that consumes an input value a and creates two output values b and c. That is, a = b + c, i.e. the sum of all inputs a equals the sum of all outputs b and c. But remember, with a homomorphic commitment scheme like elliptic curve points, there exists a <+> operation on points that is homomorphic to the ordinary school-arithmetic + on large numbers. So confidential transactions can use the point a <*> G as input and the points b <*> G and c <*> G as outputs, and anyone can verify that a <*> G = (b <*> G) <+> (c <*> G) precisely when a = b + c, without learning a, b, or c.

## Pedersen Commitments

Actually, we cannot just use a <*> G as a commitment scheme in practice. Remember, Bitcoin has a cap on the number of satoshis ever to be created, and it's less than 2^{53} satoshis, which is a fairly trivial search space. I can compute a <*> G for every value of a from 0 to 2^{53} and learn which point corresponds to which actual amount a. So in confidential transactions we cannot naively use a <*> G commitments; we need Pedersen commitments.
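This grinding attack is easy to demonstrate in any small group. The sketch below uses a toy multiplicative group, where pow(g, a, p) stands in for a <*> G; the tiny parameters are illustrative stand-ins, not real curve constants:

```python
# Toy group: quadratic residues mod p = 2q + 1, written multiplicatively,
# so pow(g, a, p) plays the role of a <*> G. Illustrative parameters only.
p, q, g = 2039, 1019, 4

amount = 123              # a "satoshi amount" drawn from a small, known range
A = pow(g, amount, p)     # naive commitment: amount <*> G, with no blinding

# Attacker: grind every plausible amount until the commitment matches.
recovered = next(a for a in range(q) if pow(g, a, p) == A)
print(recovered)  # 123 -- the "hidden" amount, recovered by brute force
```

The attack works because the committed value is the *only* unknown, and it lives in a small range; the fix is to add a second unknown that spans the whole scalar range.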

If you know what a "salt" is, then Pedersen commitments are fairly obvious. A "salt" is something you add to e.g. a password so that the hash of the password is much harder to attack. Humans are idiots and, when asked to generate passwords, will output a password drawn from fewer than 2^{30} possibilities, which is fairly easy to grind. So what you do is "salt" the password by prepending a random string to it. You then hash the salt + password, and store the salt together with the hash in your database. Then when somebody logs in, you take the password, prepend the salt, hash, and check if the hash matches the in-database hash, and if so you let them log in. Now, with a hash, even if somebody copies your password database, they can't get the passwords --- they're hashed. But with a salt, even precomputation techniques like rainbow tables stop working: a hacker can no longer hash a candidate password once and check it against every hash in your db. Instead, for every candidate password they have to prepend *each* salt, hash, and compare. That greatly increases the computational needs of a hacker, which is why salts are good.
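A minimal sketch of the salted-password scheme just described (the function names are my own):

```python
import hashlib, os

def hash_password(password):
    salt = os.urandom(16)                       # random per-user salt
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest                         # store both in the database

def check_password(password, salt, digest):
    # Re-prepend the stored salt and compare hashes.
    return hashlib.sha256(salt + password.encode()).hexdigest() == digest

salt, digest = hash_password("hunter2")
print(check_password("hunter2", salt, digest))  # True
print(check_password("*******", salt, digest))  # False
```

In production you would use a deliberately slow KDF (bcrypt, scrypt, Argon2) rather than one round of SHA-256; the salt alone only defeats precomputation, not per-hash grinding.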

A Pedersen commitment is a point a <*> H, where a is the actual value you commit to, plus (<+>) another point r <*> G. H here is a second standard generator point, different from G. The r is the salt of the Pedersen commitment. It ensures that even if you show (a <*> H) <+> (r <*> G) to somebody, they can't grind all possible values of a and try to match it with your point --- they *also* have to grind r (just as with the password-salt example above). And r is much larger: it can be a true near-256-bit number spanning the full range of SECP256K1 scalars, whereas a is constrained to "reasonable" numbers of satoshis, which cannot exceed 21 million bitcoins' worth.

Now, in order to validate a transaction with input a and outputs b and c, you only have to prove a = b + c. Suppose we are hiding those amounts using Pedersen commitments. You have an input of amount a, and you know a and r. The blockchain records the commitment (a <*> H) <+> (r <*> G). In order to create the two outputs b and c, you just have to create two new salts such that r = r[0] + r[1]. This is trivial: select a new random r[0] and compute r[1] = r - r[0]; it's just basic algebra.

Then you create a transaction consuming the input (a <*> H) <+> (r <*> G) and outputs (b <*> H) <+> (r[0] <*> G) and (c <*> H) <+> (r[1] <*> G). You know that a = b + c, and r = r[0] + r[1], while fullnodes around the world, who don't know any of the amounts or scalars involved, can just take the points (a <*> H) <+> (r <*> G) and see if it equals (b <*> H) <+> (r[0] <*> G) <+> (c <*> H) <+> (r[1] <*> G). That is all that fullnodes have to validate, they just need to perform <+> operations on points and comparison on points, and from there they validate transactions, all without knowing the actual values involved.

## Computational Binding, Information-Theoretic Hiding

Like all commitments, Pedersen commitments are binding and hiding.

However, there are really two kinds of commitments:

- computationally binding, information-theoretically hiding
- information-theoretically binding, computationally hiding

A Pedersen commitment is the first kind: even an adversary with unlimited computation can never learn *what* got hidden behind the commitment, but an adversary with enough computation could open the commitment to a different value.

But why?

Now, we have been using a and A = a <*> G as private and public keys in ECDSA and Schnorr. There is an operation <*> between a scalar and a point that generates another point, but we cannot "reverse" this operation. For example, even if I know A, and know that A = a <*> G, but do not know a, I cannot derive a --- there is no operation between A and G that lets me recover a.

Actually there is: I "just" need so much time, space, and energy that I can count a from 0 to 2^{256} and find *which* a results in A = a <*> G. This is a computational limit: I don't have a spare universe in my back pocket I can use to do all those computations.

Now, replace a with h and A with H. Remember that Pedersen commitments use a "second" standard generator point. The generator points G and H are "not really special" --- they are just random points on the curve that we selected and standardized. There is no operation between H and G that lets me learn the h where H = h <*> G, though if I happen to have a spare universe in my back pocket I can "just" brute-force it.

Suppose I *do* have a spare universe in my back pocket, and learn the scalar h such that H = h <*> G. What can I do in Pedersen commitments?

Well, I have an amount a that is committed to by (a <*> H) <+> (r <*> G). But I happen to know h! Suppose I want to double my money a without involving Elon Musk. Then:

- (a <*> H) <+> (r <*> G)
- == (a <*> (h <*> G)) <+> (r <*> G)
- == ((a * h) <*> G) <+> (r <*> G); remember, <*> is *also* homomorphic on multiplication *.
- == ((a * h + a * h - a * h) <*> G) <+> (r <*> G); just add 0.
- == ((a * h + a * h) <*> G) <+> ((-a * h) <*> G) <+> (r <*> G)
- == ((2 * a * h) <*> G) <+> ((r - a * h) <*> G)
- == ((2 * a) <*> (h <*> G)) <+> ((r - a * h) <*> G)
- == ((2 * a) <*> H) <+> ((r - a * h) <*> G); TADA! I doubled my money!

**That** is what we mean by computationally binding: if I can compute h such that H = h <*> G, then I can find another number which opens the same commitment. And of course I'd make sure that number is much larger than what I originally had in that address!
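This forgery can be replayed in the same toy multiplicative group, with the discrete log of H deliberately exposed so we can play the attacker (all parameters illustrative):

```python
# Toy group parameters (illustrative only).
p, q, g = 2039, 1019, 4
k = 777                        # the "spare universe" result: k = log_g(h)
h = pow(g, k, p)

a, r = 100, 555                # honest amount and blinding salt
C = pow(h, a, p) * pow(g, r, p) % p    # (a <*> H) <+> (r <*> G)

# Forge an opening for twice the amount: replace the salt r with r - a*k.
a_fake = 2 * a
r_fake = (r - a * k) % q
C_fake = pow(h, a_fake, p) * pow(g, r_fake, p) % p

assert C_fake == C             # the same commitment now opens to double the money
print("doubled:", a_fake)
```

Both openings hit the same group element because C = g^(a*k + r) and C_fake = g^(2*a*k + r - a*k), and the exponents are equal.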

Now, the *reason* why it is "only" computationally binding is that it is information-theoretically hiding. Suppose somebody knows h, but has no money in the cryptocurrency. All they see are points. They can try to find the original amounts, but because any amount can be mapped to "the same" point with knowledge of h (e.g. in the above, a and 2 * a got mapped to the same point by "just" replacing the salt r with r - a * h; the same can be done for 3 * a, 4 * a, etc.), they cannot learn historical amounts --- the a in historical amounts could be *anything*.

The drawback, though, is that --- as seen above --- arbitrary inflation is now introduced once somebody knows h. They can multiply their money by any arbitrary factor with knowledge of h.

It is impossible to have *both* perfect hiding (i.e. historical amounts remain hidden even after a computational break) *and* perfect binding (i.e. you can't later open the commitment to a different, much larger, amount).

Pedersen commitments just happen to have perfect hiding, but only computational binding. This means they allow hiding historical values, but in case of anything that grants vastly better computational power --- including *but not limited to* quantum breaks --- they allow arbitrary inflation.

## Changing The Tradeoffs with ElGamal Commitments

An ElGamal commitment is just a Pedersen commitment, but with the point r <*> G also stored in a separate section of the transaction.

This commits the r, and fixes it to a specific value. This prevents me from opening my (a <*> H) <+> (r <*> G) as ((2 * a) <*> H) <+> ((r - a * h) <*> G), because the (r - a * h) would not match the r <*> G sitting in a separate section of the transaction. This forces me to be bound to that specific value, and no amount of computation power will let me escape --- it is information-theoretically binding i.e. perfectly binding.

But that is now only computationally hiding. An evil surveillant with arbitrary time and space can focus on the r <*> G sitting in a separate section of the transaction, and grind r from 0 to 2^{256} to determine which r matches that point. From there, they can negate r to get (-r) <*> G, add it to (a <*> H) <+> (r <*> G) to get a <*> H, and *then* grind that to determine the value a. With massive increases in computational ability --- including *but not limited to* quantum breaks --- an evil surveillant can see all the historical amounts of confidential transactions.
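Extending the toy sketch: storing r <*> G alongside the Pedersen point blocks the forged opening, because the forged salt no longer matches (toy parameters, illustrative only):

```python
# Toy group parameters (illustrative only).
p, q, g = 2039, 1019, 4
k = 777                            # attacker's hard-won discrete log of h
h = pow(g, k, p)

a, r = 100, 555
C = pow(h, a, p) * pow(g, r, p) % p    # Pedersen part: (a <*> H) <+> (r <*> G)
R = pow(g, r, p)                       # ElGamal extra: r <*> G, stored separately

# The Pedersen-only forgery still produces the same combined point C...
a_fake, r_fake = 2 * a, (r - a * k) % q
assert pow(h, a_fake, p) * pow(g, r_fake, p) % p == C

# ...but the forged salt fails the separate check against R, so it is rejected.
assert pow(g, r_fake, p) != R
print("forgery rejected by the ElGamal check")
```

The cost of this extra check is exactly the hiding tradeoff described above: R = r <*> G is a fixed target that a powerful-enough adversary can grind.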

## Conclusion

This is the source of the tradeoff: either you design confidential transactions so in case of a quantum break, historical transactions continue to hide their amounts, but inflation of the money is now unavoidable, OR you make the money supply sacrosanct, but you potentially sacrifice amount hiding in case of some break, including but not limited to quantum breaks.

submitted by almkglor to Bitcoin [link] [comments]
Announcing v0.4.0 releases of these RustCrypto elliptic curve crates:

- k256: secp256k1 (as used by Bitcoin, Ethereum, etc)
    - GitHub: https://github.com/RustCrypto/elliptic-curves/tree/master/k256
    - crates.io: https://crates.io/crates/k256
    - docs.rs: https://docs.rs/k256/
- p256: NIST P-256 (as used by SSL/TLS, Bluetooth, etc)
    - GitHub: https://github.com/RustCrypto/elliptic-curves/tree/master/p256
    - crates.io: https://crates.io/crates/p256
    - docs.rs: https://docs.rs/p256/

The major notable new features in these releases are:

- k256 docs (secp256k1)
- p256 docs (NIST P-256)
- Generic implementation

# Elliptic Curve Diffie-Hellman

Key exchange protocol which establishes a shared secret between two parties.

# Elliptic Curve Digital Signature Algorithm

Pervasively used public-key scheme for authenticating messages.

# Notes on this release

These crates contain *experimental* pure Rust implementations of scalar field arithmetic for the respective elliptic curves (secp256k1, NIST P-256). These implementations are new, unaudited, and haven't received much public scrutiny. We have explicitly labeled them as being at a "USE AT YOUR OWN RISK" level of maturity.

That said, these implementations utilize the best modern practices for this class of elliptic curves (complete projective formulas providing constant-time scalar multiplication).

In particular:

- The k256 crate now implements optimized "lazy" field arithmetic based on the bitcoin-core/secp256k1 C library, optimized for both 32-bit and 64-bit architectures.
- The p256 crate includes sidechannel-resistant scalar arithmetic optimized for creating signatures on embedded platforms, and presently provides 64-bit scalar field arithmetic.

*EDIT: the version in the title is incorrect. The correct version is v0.4.0, unfortunately the title cannot be edited.*

submitted by bascule to rust [link] [comments]
## Gains x Streamr AMA Recaphttps://preview.redd.it/o74jlxia8im51.png?width=1236&format=png&auto=webp&s=93eb37a3c9ed31dc3bf31c91295c6ee32e1582beThanks to everyone in our community who attended the GAINS AMA yesterday with, Shiv Malik. We were excited to see that so many people attended and gladly overwhelmed by the amount of questions we got from you on Twitter and Telegram. We decided to do a little recap of the session for anyone who missed it, and to archive some points we haven’t previously discussed with our community. Happy reading and thanks to Alexandre and Henry for having us on their channel! What is the project about in a few simple sentences?At Streamr we are building a real-time network for tomorrow’s data economy. It’s a decentralized, peer-to-peer network which we are hoping will one day replace centralized message brokers like Amazon’s AWS services. On top of that one of the things I’m most excited about are Data Unions. With Data Unions anyone can join the data economy and start monetizing the data they already produce. Streamr’s Data Union framework provides a really easy way for devs to start building their own data unions and can also be easily integrated into any existing apps. Okay, sounds interesting. Do you have a concrete example you could give us to make it easier to understand?The best example of a Data Union is the first one that has been built out of our stack. It's called Swash and it's a browser plugin. You can download it here: http://swashapp.io/ And basically it helps you monetize the data you already generate (day in day out) as you browse the web. It's the sort of data that Google already knows about you. But this way, with Swash, you can actually monetize it yourself. The more people that join the union, the more powerful it becomes and the greater the rewards are for everyone as the data product sells to potential buyers. Very interesting. What stage is the project/product at? It's live, right?Yes. It's live. 
And the Data Union framework is in public beta. The Network is on course to be fully decentralized at some point next year. How much can a regular person browsing the Internet expect to make for example?So that's a great question. The answer is no one quite knows yet. We do know that this sort of data (consumer insights) is worth hundreds of millions and really isn't available in high quality. So With a union of a few million people, everyone could be getting 20-50 dollars a year. But it'll take a few years at least to realise that growth. Of course Swash is just one data union amongst many possible others (which are now starting to get built out on our platform!) With Swash, I believe they now have 3,000 members. They need to get to 50,000 before they become really viable but they are yet to do any marketing. So all that is organic growth. I assume the data is anonymized btw?Yes. And there in fact a few privacy protecting tools Swash supplys to its users. How does Swash compare to Brave?So Brave really is about consent for people's attention and getting paid for that. They don't sell your data as such. Swash can of course be a plugin with Brave and therefore you can make passive income browsing the internet. Whilst also then consenting to advertising if you so want to earn BAT. Of course it's Streamr that is powering Swash. And we're looking at powering other DUs - say for example mobile applications. The holy grail might be having already existing apps and platforms out there, integrating DU tech into their apps so people can consent (or not) to having their data sold - and then getting a cut of that revenue when it does sell. The other thing to recognise is that the big tech companies monopolise data on a vast scale - data that we of course produce for them. That is stifling innovation. Take for example a competitor map app. To effectively compete with Google maps or Waze, they need millions of users feeding real time data into it. 
Without that - it's like Google maps used to be - static and a bit useless. Right, so how do you convince these big tech companies that are producing these big apps to integrate with Streamr? Does it mean they wouldn't be able to monetize data as well on their end if it becomes more available through an aggregation of individuals?If a map application does manage to scale to that level then inevitably Google buys them out - that's what happened with Waze. But if you have a data union which bundles together the raw location data of millions of people then any application builder can come along and license that data for their app. This encourages all sorts of innovation and breaks the monopoly. We're currently having conversations with Mobile Network operators to see if they want to pilot this new approach to data monetization. And that's what even more exciting. Just be explicit with users - do you want to sell your data? Okay, if yes, then which data point do you want to sell. Then the mobile network operator (like T-mobile for example) then organises the sale of the data of those who consent and everyone gets a cut. Streamr - in this example provides the backend to port and bundle the data, and also the token and payment rail for the payments. So for big companies (mobile operators in this case), it's less logistics, handing over the implementation to you, and simply taking a cut?It's a vision that we'll be able to talk more about more concretely in a few weeks time 😁 Compared to having to make sense of that data themselves (in the past) and selling it themselvesSort of. We provide the backened to port the data and the template smart contracts to distribute the payments. They get to focus on finding buyers for the data and ensuring that the data that is being collected from the app is the kind of data that is valuable and useful to the world. (Through our sister company TX, we also help build out the applications for them and ensure a smooth integration). 
The other thing to add is that the reason why this vision is working, is that the current data economy is under attack. Not just from privacy laws such as GDPR, but also from Google shutting down cookies, bidstream data being investigated by the FTC (for example) and Apple making changes to IoS14 to make third party data sharing more explicit for users. All this means that the only real places for thousands of multinationals to buy the sort of consumer insights they need to ensure good business decisions will be owned by Google/FB etc, or from SDKs or through this method - from overt, rich, consent from the consumer in return for a cut of the earnings. A couple of questions to get a better feel about Streamr as a whole now and where it came from. How many people are in the team? For how long have you been working on Streamr?We are around 35 people with one office in Zug, Switzerland and another one in Helsinki. But there are team members all over the globe, we’ve people in the US, Spain, the UK, Germany, Poland, Australia and Singapore. I joined Streamr back in 2017 during the ICO craze (but not for that reason!) And did you raise funds so far? If so, how did you handle them? Are you planning to do any future raises?We did an ICO back in Sept/Oct 2017 in which we raised around 30 Millions CHF. The funds give us enough runway for around five/six years to finalize our roadmap. We’ve also simultaneously opened up a sister company consultancy business, TX which helps enterprise clients implementing the Streamr stack. We've got no more plans to raise more! What is the token use case? How did you make sure it captures the value of the ecosystem you're buildingThe token is used for payments on the Marketplace (such as for Data Union products for example) also for the broker nodes in the Network. ( we haven't talked much about the P2P network but it's our project's secret sauce). The broker nodes will be paid in DATAcoin for providing bandwidth. 
We are currently working together with Blockscience on our token economics. We've just started the second phase of their consultancy process and will soon be able to share more on the Streamr Network's token economics. But if you want to sum up the Network in a sentence or two - imagine the BitTorrent network being run by nodes who get paid to do so. Except that instead of passing around static files, it's real-time data streams. That of course means it's really well suited for the IoT economy.

Well, let's continue with questions from Twitter, and this one comes at the perfect time. Can the Streamr Network be used to transfer data from IoT devices? Is the network bandwidth sufficient? How is it possible to monetize the received data from a huge number of IoT devices? From u/EgorCypto

Yes, IoT devices are a perfect use case for the Network. When it comes to the network's bandwidth and speed - the Streamr team just recently did extensive research to find out how well the network scales. The result was that it is on par with centralized solutions. We ran experiments with network sizes between 32 and 2048 nodes, and in the largest network of 2048 nodes, 99% of deliveries happened within 362 ms globally. To put these results in context, PubNub, a centralized message brokering service, promises to deliver messages within 250 ms — and that's a centralized service! So we're super happy with those results. Here's a link to the paper: https://medium.com/streamrblog/streamr-network-performance-and-scalability-whitepaper-adb461edd002

While we're on the technical side, a second question from Twitter: Can you be sure that valuable data is safe and not shared with service providers? Are you using any encryption methods? From u/CryptoMatvey

Yes, the messages in the Network are encrypted. Currently all nodes are still run by the Streamr team. This will change in the Brubeck release - our last milestone on the roadmap - when end-to-end encryption is added.
This release adds end-to-end encryption and automatic key exchange mechanisms, ensuring that node operators cannot access any confidential data. BTW, if you want to get very technical, the encryption algorithms we are using are: AES (AES-256-CTR) for encryption of data payloads, RSA (PKCS #1) for securely exchanging the AES keys, and ECDSA (secp256k1) for data signing (same as Bitcoin and Ethereum).

Last question from Twitter, less technical now :) In their AMA ad, they say that Streamr has three unions: Swash, Tracey and MyDiem. Why does Tracey help fisherfolk in the Philippines monetize their catch data? Do they only work with this country or do they plan to expand? From u/alej_pacedo

So yes, Tracey is one of the first Data Unions on top of the Streamr stack. Currently we are working together with WWF-Philippines and the UnionBank of the Philippines on a first pilot with local fishing communities in the Philippines. WWF is interested in the catch data to protect wildlife and make sure that no overfishing happens. And at the same time the fisherfolk are incentivized to record their catch data by being able to access microloans from banks, which in turn helps them make their business more profitable. So far, we have lots of interest from other places in Southeast Asia which would like to use Tracey, too. In fact, TX have already had explicit interest in building out the use cases in other countries, and not just for seafood tracking but also for many other agricultural products. (I think they had a call this week about a use case involving cows 😂)

I recall late last year that the Streamr Data Union framework was launched into private beta; now a public beta was recently released. What are the differences? Any added new features? By u/Idee02

The main difference is that the DU 2.0 release will be more reliable and also more transparent, since the sidechain we are using for micropayments is now based on blockchain consensus (PoA).
Are there plans in the pipeline for Streamr to focus on consumer-facing products themselves, or will the emphasis be on the further development of the underlying engine? By u/Andromedamin

We're all about what's under the hood. We want third-party devs to take on the challenge of building the consumer-facing apps. We know it would be foolish to try and do it all!

As a project, how do you consider the progress of the project to fully developed (in % of progress plz)? By u/Hash2T

We're about 60% through, I reckon!

What tools does Streamr offer developers so that they can create their own DApps and monetize data? What is the Streamr architecture? How do the Ethereum blockchain, the Streamr Network and the Streamr Core applications interact? By u/CryptoDurden

We'll be releasing the Data Union framework in a few weeks from now, and I think DApp builders will be impressed with what they find.

We all know that blockchain has many disadvantages as well. So why did Streamr choose blockchain as a combination for its technology? What's your plan to merge blockchain with your technologies to make it safer and more convenient for your users? By u/noonecanstopme

So we're not a blockchain ourselves - that's important to note. The P2P network only uses blockchain tech for the payments. Why on earth, for example, would you want to store every single piece of info on a blockchain? You should only store what you want to store, and that should probably happen off-chain. So we think we got the mix right there.

What were the requirements needed for node setup? By u/John097

Good q - we're still working on that, but those specs will be out in the next release.

How does the Streamr team ensure good data is entered into the blockchain by participants? By u/kartika84

Another great Q there! From the product-buying end, this will be done by reputation. But ensuring the quality of the data as it passes through the network - if that is what you also mean - is all about getting the architecture right.
In a decentralised network that's not easy, as data points in streams have to arrive in the right order. It's one of the biggest challenges, but we think we're solving it in a really decentralised way.

What are the requirements for integrating applications with a Data Union? What role does the DATA token play in this case? By u/JP_Morgan_Chase

There are no specific requirements as such, just that your application needs to generate some kind of real-time data. Data Union members and administrators are both paid in DATA by data buyers coming from the Streamr marketplace.

Regarding security and legality, how does Streamr guarantee that the data uploaded by a given user belongs to them, and that they can monetize and capitalize on it? By u/kherrera22

So that's a sort of million-dollar question for anyone involved in a digital industry. Within our system there are ways of ensuring that, but in the end the negotiation of data licensing will still, in many ways, be done human to human and via legal licenses rather than smart contracts - at least when it comes to sizeable data products. There are more answers to this but it's a long one!

Okay, thank you all for all of those! The AMA took place in the GAINS Telegram group 10/09/20. Answers by Shiv Malik.

Of all the options available at that time, he chose the one that met these criteria: Elliptic Curve Digital Signature Algorithm, or ECDSA.

At that time, native support for ECDSA was provided in OpenSSL, an open-source set of encryption tools developed by experienced cryptographers to improve the confidentiality of online communications. Compared to other popular schemes, ECDSA had advantages such as:

- Low demand for computing resources;
- Short key lengths.

ECDSA has two separate procedures: one for signing and one for verification. Each procedure is an algorithm consisting of several arithmetic operations. The signing algorithm uses the private key; the verification algorithm uses only the public key.

To use ECDSA, a protocol such as Bitcoin must fix a set of parameters for the elliptic curve and its finite field, so that every user of the protocol knows and applies the same parameters. Otherwise, everyone would be solving their own equations, which would never converge with each other, and they would never agree on anything.
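To make the fixed parameters concrete, here is a from-scratch sketch of ECDSA over Bitcoin's secp256k1 parameters, using only the Python standard library. This illustrates the math only — it is not constant-time or side-channel safe; real applications should use a vetted library such as libsecp256k1.

```python
# Illustrative, from-scratch ECDSA over secp256k1. NOT production code.
import hashlib, secrets

# secp256k1 domain parameters fixed by the Bitcoin protocol:
# curve y^2 = x^3 + 7 over F_p, base point G, group order n.
p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(P, Q):
    # Point addition on the curve (None represents the point at infinity).
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0: return None
    if P == Q:
        lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def mul(k, P):
    # Double-and-add scalar multiplication.
    R = None
    while k:
        if k & 1: R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

def h(msg):
    return int.from_bytes(hashlib.sha256(msg).digest(), 'big')

def sign(priv, msg):
    # Signing uses the private key and a fresh random nonce k.
    z = h(msg)
    while True:
        k = secrets.randbelow(n - 1) + 1
        r = mul(k, G)[0] % n
        s = (z + r * priv) * pow(k, -1, n) % n
        if r and s:
            return (r, s)

def verify(pub, msg, sig):
    # Verification needs only the public key.
    r, s = sig
    if not (0 < r < n and 0 < s < n): return False
    z, w = h(msg), pow(s, -1, n)
    pt = add(mul(z * w % n, G), mul(r * w % n, pub))
    return pt is not None and pt[0] % n == r

priv = secrets.randbelow(n - 1) + 1
pub = mul(priv, G)
sig = sign(priv, b"hello bitcoin")
print(verify(pub, b"hello bitcoin", sig))        # True
print(verify(pub, b"tampered message", sig))     # False
```

Note how `verify` touches only the public key, while `sign` needs the private key plus a fresh random nonce — reusing a nonce across two signatures leaks the private key.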

For all these parameters, Bitcoin uses very, very large (well, awesomely, incredibly huge) numbers. This matters: in fact, all practical applications of ECDSA use huge numbers, because the security of the algorithm relies on these values being far too large to recover a key by simple brute force. A 384-bit ECDSA key is considered secure enough by the NSA for protecting top-secret US government communications.

Schnorr signatures take the use of keys to a new level. A Schnorr signature occupies only 64 bytes when it gets into a block, which reduces the space occupied by transactions by roughly 4%. And since all Schnorr signatures are the same size, the total size of the part of a block containing such signatures can be calculated in advance. Being able to pre-calculate the block size is the key to increasing it safely in the future.

Keep up with the news of the crypto world at CoinJoy.io Follow us on Twitter and Medium. Subscribe to our YouTube channel. Join our Telegram channel. For any inquiries mail us at [[email protected]](mailto:[email protected]).

So over a lunch conversation with a nocoiner I got what I thought sounded like a far-fetched conspiracy theory, went a little like this... "Mmm yeah I reckon the NSA has a backdoor code that would allow them access to any private key simply by looking at the public key" ... I'm used to the "Bitcoin will be hacked" argument but this seems to be on a different level.

Thoughts? It's a new one for me for sure and not sure how to answer it....

submitted by hungdoge to Bitcoin [link] [comments]

This has been bothering me for a while.

I'm a newbie in computer science, and I just found out about Grover's algorithm, which can only be implemented on a quantum computer. Supposedly it can achieve a quadratic speedup over a classical computer, brute-forcing a solution to an n-bit symmetric encryption key in 2^(n/2) iterations.

This led me to think that, by utilizing a quantum computer or quantum simulator of about 40-qubits that runs Grover's algorithm, is it possible to mine bitcoins this way? The current difficulty of bitcoin mining is about 15,466,098,935,554 (approximately 2^44), which means that it would take about 2^44*2^32=2^76 SHA256 hashes before a valid block header hash is found.

However, by implementing Grover's algorithm, we would only need to search through 2^(76/2) = 2^38 hashes to discover a valid block header hash. A 38-qubit quantum computer should be sufficient in this case - which means the 40-qubit quantum computer should be more than enough to handle bitcoin mining.
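The arithmetic above can be sanity-checked in a few lines. Note that Grover's quadratic speedup means the *square root* of the search space, 2^(76/2) = 2^38 iterations, not 2^76 divided by 2:

```python
import math

difficulty = 15_466_098_935_554
# Expected classical work: difficulty * 2^32 hashes per valid block header.
classical = difficulty * 2**32
print(round(math.log2(classical), 1))   # 75.8, i.e. about 2^76 hashes
# Grover's quadratic speedup: sqrt(2^76) = 2^(76/2) = 2^38 iterations.
grover = math.isqrt(classical)
print(round(math.log2(grover)))         # 38
```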

Therefore - is it possible to use quantum computers to mine bitcoins this way? I'm not too familiar with quantum computers, so please correct me if I missed something.......

NOTE: I am NOT asking whether it is possible to use quantum computers to break the ECDSA secp256k1 algorithm, which would effectively allow anyone to steal bitcoins from wallets. I know that this would require much more than 40 qubits, and is definitely not happening in the near-future.

Rather, I'm asking about bitcoin mining, which is a much easier problem than trying to break ECDSA secp256k1.

submitted by Palpatine88888 to QuantumComputing [link] [comments]

Price? Who gives a shit about price when Lightning Network development is a lot more interesting?????

One thing about LN is that because there's no need for consensus before implementing things, figuring out the status of things is quite a bit more difficult than on Bitcoin. On one hand it lets larger groups of people work on improving LN faster without having to coordinate so much. On the other hand it leads to some fragmentation of the LN space, with compatibility problems occasionally coming up.

The below is just a small sample of LN stuff I personally find interesting. There's a bunch of other stuff, like splice and dual-funding, that I won't cover --- post is long enough as-is, and besides, some of the below aren't as well-known.

Anyway.....

# "eltoo" Decker-Russell-Osuntokun

Yeah the exciting new Lightning Network channel update protocol!

## Advantages

## Myths

## Disadvantages

# Multipart payments / AMP

Splitting up large payments into smaller parts!

## Details

## Advantages

## Disadvantages

# Payment points / scalars

Using the magic of elliptic curve homomorphism for fun and Lightning Network profits!

Basically, currently on Lightning an invoice has a payment hash, and the receiver reveals a payment preimage which, when inputted to SHA256, returns the given payment hash.

Instead of using payment hashes and preimages, just replace them with payment points and scalars. An invoice will now contain a payment point, and the receiver reveals a payment scalar (private key) which, when multiplied with the standard generator point G on secp256k1, returns the given payment point.

This is basically Scriptless Script usage on Lightning, instead of HTLCs we have Scriptless Script Pointlocked Timelocked Contracts (PTLCs).
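The switch from hashes to points can be sketched side by side. The secp256k1 arithmetic below is a minimal illustration, not production code; the names `payment_hash`, `payment_point`, and `scalar` just mirror the terms used above:

```python
import hashlib, secrets

# --- current Lightning: payment hash / preimage ---
preimage = secrets.token_bytes(32)
payment_hash = hashlib.sha256(preimage).digest()   # goes into the invoice
# The payee later reveals `preimage`; anyone can check:
assert hashlib.sha256(preimage).digest() == payment_hash

# --- proposed: payment point / scalar on secp256k1 ---
p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(P, Q):
    # Point addition; None is the point at infinity.
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0: return None
    lam = (3*P[0]*P[0]*pow(2*P[1], -1, p) if P == Q
           else (Q[1]-P[1])*pow(Q[0]-P[0], -1, p)) % p
    x = (lam*lam - P[0] - Q[0]) % p
    return (x, (lam*(P[0]-x) - P[1]) % p)

def mul(k, P):
    # Double-and-add scalar multiplication.
    R = None
    while k:
        if k & 1: R = add(R, P)
        P, k = add(P, P), k >> 1
    return R

scalar = secrets.randbelow(n - 1) + 1      # the "preimage" (a private key)
payment_point = mul(scalar, G)             # goes into the invoice
# The payee later reveals `scalar`; anyone can check:
assert mul(scalar, G) == payment_point
print("both constructions check out")
```

The crucial difference is that points, unlike hashes, can be added together — which is what the decorrelation, stuckless, and escrow ideas below exploit.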

## Advantages

## Disadvantages

# Pay-for-data

Ensuring that payers cannot access data or other digital goods without proof of having paid the provider.

In a nutshell: the payment preimage used as a proof-of-payment is the decryption key of the data. The provider gives the encrypted data, and issues an invoice. The buyer of the data then has to pay over Lightning in order to learn the decryption key, with the decryption key being the payment preimage.
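A toy sketch of the idea. The stream cipher here (SHA-256 in counter mode) is a deliberately simplified stand-in — a real provider would use AES or ChaCha20 — but the point is only that the decryption key and the payment preimage are the same 32 bytes:

```python
import hashlib, secrets

def keystream_xor(key, data):
    # Toy stream cipher for illustration only (SHA-256 in counter mode).
    # Encryption and decryption are the same XOR operation.
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, 'big')).digest()
        out += bytes(a ^ b for a, b in zip(data[i:i+32], block))
    return bytes(out)

# Provider: the decryption key doubles as the payment preimage.
key = secrets.token_bytes(32)
ciphertext = keystream_xor(key, b"the digital goods being sold")
payment_hash = hashlib.sha256(key).digest()   # goes into the invoice

# Buyer: paying the invoice reveals `key` as the payment preimage,
# which both proves the payment and decrypts the goods.
assert hashlib.sha256(key).digest() == payment_hash
print(keystream_xor(key, ciphertext))   # b'the digital goods being sold'
```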

## Advantages

## Disadvantages

# Stuckless payments

No more payments getting stuck somewhere in the Lightning network without knowing whether the payee will ever get paid!

(that's actually a bit of an overclaim: payments can still get stuck, but what "stuckless" really enables is that we can now safely run another *parallel* payment attempt until any one of the payment attempts gets through).

Basically, by using the ability to add points together, the payer can enforce that the payee can only claim the funds if it knows two pieces of information:

- The payment scalar corresponding to the payment point in the invoice signed by the payee.
- An "acknowledgment" scalar provided by the payer to the payee via another communication path.

Multiple payment attempts can then be run *in parallel*, unlike the current situation where we must wait for an attempt to fail before trying another route. The payer only needs to ensure it generates different acknowledgment scalars for each payment attempt.

Then, if at least one of the payment attempts reaches the payee, the payee can then acquire the acknowledgment scalar from the payer. Then the payee can acquire the payment. If the payee attempts to acquire multiple acknowledgment scalars for the same payment, the payer just gives out one and then tells the payee "LOL don't try to scam me", so the payee can only acquire a single acknowledgment scalar, meaning it can only claim a payment once; it can't claim multiple parallel payments.

## Advantages

## Disadvantages

# Non-custodial escrow over Lightning

The "acknowledgment" scalar used in stuckless can be reused here.

The acknowledgment scalar is derived as an ECDH shared secret between the payer and the escrow service. On arrival of payment to the payee, the payee queries the escrow to determine if the acknowledgment point is from a scalar that the escrow can derive using ECDH with the payer, plus a hash of the contract terms of the trade (for example, to transfer some goods in exchange for Lightning payment). Once the payee gets confirmation from the escrow that the acknowledgment scalar is known by the escrow, the payee performs the trade, then asks the payer to provide the acknowledgment scalar once the trade completes.

If the payer refuses to give the acknowledgment scalar even though the payee has given over the goods to be traded, then the payee contacts the escrow again, reveals the contract terms text, and requests to be paid. If the escrow finds in favor of the payee (i.e. it determines the goods have arrived at the payer as per the contract text) then it gives the acknowledgment scalar to the payee.

## Advantages

## Disadvantages

# Payment decorrelation

Because elliptic curve points can be added (unlike hashes), for every forwarding node we can add a "blinding" point / scalar. This prevents multiple forwarding nodes from discovering that they have been on the same payment route. This is unlike the current payment hash + preimage, where the same hash is used along the entire route.

In fact, the acknowledgment scalar we use in stuckless and escrow can simply be the sum of each blinding scalar used at each forwarding node.
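The scalar/point homomorphism this relies on — (a + b)·G = a·G + b·G — can be checked directly. The minimal secp256k1 arithmetic below is an illustrative sketch only:

```python
# Sketch of the homomorphism behind payment decorrelation: per-hop
# blinding scalars sum to a single acknowledgment scalar, and the
# corresponding points sum to the acknowledgment point.
import secrets

p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(P, Q):
    # Point addition; None is the point at infinity.
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0: return None
    lam = (3*P[0]*P[0]*pow(2*P[1], -1, p) if P == Q
           else (Q[1]-P[1])*pow(Q[0]-P[0], -1, p)) % p
    x = (lam*lam - P[0] - Q[0]) % p
    return (x, (lam*(P[0]-x) - P[1]) % p)

def mul(k, P):
    # Double-and-add scalar multiplication.
    R = None
    while k:
        if k & 1: R = add(R, P)
        P, k = add(P, P), k >> 1
    return R

# One blinding scalar per forwarding hop; each hop sees a different point.
blinding = [secrets.randbelow(n - 1) + 1 for _ in range(3)]
per_hop_points = [mul(b, G) for b in blinding]

# The acknowledgment scalar is just the sum of the blinding scalars;
# its point equals the sum of the per-hop points.
ack_scalar = sum(blinding) % n
lhs = mul(ack_scalar, G)
rhs = None
for pt in per_hop_points:
    rhs = add(rhs, pt)
print(lhs == rhs)   # True
```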

## Advantages

## Disadvantages

submitted by almkglor to Bitcoin [link] [comments]

- Solves "toxic waste" problem. In the current Poon-Dryja update protocol, old state ("waste") is dangerous ("toxic") because if your old state is acquired by your most hated enemy, they can use that old state to publish a stale unilateral close transaction, which your counterparty must treat as a theft attempt and punish you, causing you to lose funds. With Decker-Russell-Osuntokun old state is not revoked, but is instead gainsaid by later state: instead of actively punishing old state, it simply replaces the old state with a later state.
- Allows multiple participants in the update protocol. This can be used as the update protocol for a channel factory with 3 or more participants, for example (channels are not practical for multiple participants since the loss of any one participant makes the channel completely unusable; it's more sensible to have a multiple-participant factory that splits up into 2-participant channels). Poon-Dryja only supports two participants. Another update protocol, Decker-Wattenhofer, also supports multiple participants, but requires much larger locktimes in case of a unilateral close (measurable in weeks, whereas Poon-Dryja and Decker-Russell-Osuntokun can be measured in hours or days).
- It uses nLockTime in a very clever way.

- No, it does not solve the "watchtower needed" problem. Decker-Russell-Osuntokun *still* requires watchtowers if you're planning to be offline for a long time.
- What might be confused is that it was *initially* thought that watchtowers under Decker-Russell-Osuntokun could be made more efficient by having the channel participant update a single "slot" in the watchtower, rather than having to consume one "slot" per update in Poon-Dryja. However, the existence of the "poisoned blob" attack by ZmnSCPxj means that having a replaceable "slot" is risky if the other participant of the channel can spoof you. And the safest way to prevent spoofing somebody is to identify that somebody --- but now that means the watchtower can surveill the activities of somebody it has identified, losing privacy.

- Requires base layer change --- SIGHASH_NOINPUT / SIGHASH_ANYPREVOUT. This is still being worked out and may potentially not reach Bitcoin anytime soon.
- Determining costs of routes is somewhat harder, and may complicate routefinding algorithms. In particular: every channel today has a "CLTV Delta", a number of blocks by which the total maximum delay of the payment is increased. This maximum delay is the maximum amount of time by which an outgoing payment can be locked, and needs to be reduced for UX purposes. Decker-Russell-Osuntokun will also add a "CSV minimum", a number of blocks, which must be smaller than the delay of an HTLC going through the channel. Current routefinding algos are good at minimizing a summed-up cost (like the "CLTV Delta") so the "CSV minimum" may require discovering / developing new routefinding algos.
- Due to the "CSV minimum" above, existing nodes that don't understand Decker-Russell-Osuntokun cannot reliably route over Decker-Russell-Osuntokun channels, as they might not impose this minimum properly.

- There are at least three variants of multipart payments: Original, Base, and High.
- Original is the original AMP proposed by Lightning Labs. It sacrifices proof-of-payment in order to allow each path to have a different payment hash. This is done by having the payer use a derivation scheme to generate each part's payment preimage from a seed, then having the payer split the seed (using secret sharing) across the parts. The receiver can only reconstruct the seed if all parts reach it.
- Base simply uses the same payment hash for all routes. This retains proof-of-payment (i.e. an invoice is undeniably signed by the receiver, including a payment hash in the invoice; public knowledge of the payment preimage is proof that the receiver has in fact received money, and any third party can be convinced of this by being shown the signed invoice and the preimage). The receiver *could* just take one part of the payment and then claim to be underpaid by the payer and then deny service, but claiming any one part is enough to publish the payment preimage, creating a proof-of-payment: so the receiver can provably be made liable, even if it took just one part. Thus the incentive of the receiver is to only take in the payment once all parts have arrived.
- High requires elliptic curve points / scalars. It combines both Original and Base, retaining proof-of-payment (sacrificed by Original) and ensuring cryptographically-secure waiting for all parts (rather than the merely economically-incentivized waiting of Base). This is done by using the elliptic curve homomorphism over scalar addition to add the payer-provided preimage (really a scalar) of Original to the payee-provided preimage (really a scalar) of Base.

- Better expected reliability. Channels are limited by capacity. By splitting up into many smaller payments, you can fit into more channels and be more likely to successfully reach the payee.
- Capacity on multiple of your channels can be used to pay. Currently if you have 0.05BTC on one channel and 0.05BTC on another channel, you can't pay 0.06BTC without first rebalancing your channels (and paying fees for the rebalance first, whether the payment succeeds or not). With multipart you can now combine the capacities of multiple of your channels, and only pay fees for combining them if the payment pushes through.
- Wumbo payments (oversized payments) come "for free" without having to be explicitly supported by the nodes of the network: you just split up wumbo payments into parts smaller than the wumbo limit.

- Multipart will have higher fees. Part of the feerate of each channel is a flat-rate fee. Going through multiple paths means paying more of this flat-rate fee.
- It's not clear how to split up payments. Heuristics for payment splitting have to be derived and developed and tested.

Basically, currently on Lightning an invoice has a payment hash, and the receiver reveals a payment preimage which, when inputted to SHA256, returns the given payment hash.

Instead of using payment hashes and preimages, just replace them with payment points and scalars. An invoice will now contain a payment point, and the receiver reveals a payment scalar (private key) which, when multiplied with the standard generator point G on secp256k1, returns the given payment point.

This is basically Scriptless Script usage on Lightning, instead of HTLCs we have Scriptless Script Pointlocked Timelocked Contracts (PTLCs).

- Enables a shit-ton of improvements: payment decorrelation, stuckless payments, noncustodial escrow over Lightning (the Hodl Hodl Lightning escrow is custodial, read the fine print), High multipart.
- It's the same coolness that makes Schnorr Signatures cool. ECDSA, despite being based on elliptic curves, is not cool because the hash-the-nonce operation needed to prevent it from infringing Schnorr's *fatherfucking* patent also prevents ECDSA from using the cool elliptic curve homomorphism of addition over scalars.

- Requires Schnorr on Bitcoin layer.
- Actually, we can work with 2p-ECDSA without waiting for Schnorr. We get back the nice elliptic curve homomorphism by passing the ECDSA nonce through another cryptosystem, Paillier. This gets us the ability to do Scriptless Script. I think it has only 80-bits security because of going through Paillier though.
- Basically the conundrum is: we could implement 2p-ECDSA now, hope we never have to test the 80-bit security anytime soon, then switch to Schnorr with 128-bit security later (which means reimplementing a bunch of things, because the calculations are different and the data that needs to be exchanged between channel participants is *very* different between 2p-ECDSA and Schnorr). Reimplementing is painful and is more dev work. If we don't implement with 2p-ECDSA now, though, we will be delaying all the nice elliptic curve goodness (stuckless, noncustodial escrow, payment decorrelation) until Bitcoin gets Schnorr.

- Elliptic curve discrete log problem is theoretically quantum-vulnerable. If we can't find a quantum-resistant homomorphic construction, we'll have to give up the advantages (payment decorrelation, stuckless payments, noncustodial escrow over Lightning) we got from using elliptic curve points and go back to boring old hashes.

In a nutshell: the payment preimage used as a proof-of-payment is the decryption key of the data. The provider gives the encrypted data, and issues an invoice. The buyer of the data then has to pay over Lightning in order to learn the decryption key, with the decryption key being the payment preimage.

- Enables data providers to sell data. This could be sensors, livestreams, blogs, articles, whatever.

- There's no scheme to determine if the data provider is providing actually-useful data. The data provider could just stream https://random.org for example. This is a potentially-impossible problem. Even if the data provider provides a "sample" of the data, and is able to derive some proof that the sample is indeed a true snippet of the encrypted data, the *rest* of the data outside of the sample might just be random junk.

Basically, by using the ability to add points together, the payer can enforce that the payee can only claim the funds if it knows two pieces of information:

- The payment scalar corresponding to the payment point in the invoice signed by the payee.
- An "acknowledgment" scalar provided by the payer to the payee via another communication path.

Then, if at least one of the payment attempts reaches the payee, the payee can then acquire the acknowledgment scalar from the payer. Then the payee can acquire the payment. If the payee attempts to acquire multiple acknowledgment scalars for the same payment, the payer just gives out one and then tells the payee "LOL don't try to scam me", so the payee can only acquire a single acknowledgment scalar, meaning it can only claim a payment once; it can't claim multiple parallel payments.

- Can safely run multiple parallel payment attempts as long as you have the funds to do so.

- Needs payment point + scalar

The acknowledgment scalar is derived as an ECDH shared secret between the payer and the escrow service. On arrival of payment to the payee, the payee queries the escrow to determine if the acknowledgment point is from a scalar that the escrow can derive using ECDH with the payer, plus a hash of the contract terms of the trade (for example, to transfer some goods in exchange for Lightning payment). Once the payee gets confirmation from the escrow that the acknowledgment scalar is known by the escrow, the payee performs the trade, then asks the payer to provide the acknowledgment scalar once the trade completes.

If the payer refuses to give the acknowledgment scalar even though the payee has given over the goods to be traded, then the payee contacts the escrow again, reveals the contract terms text, and requests to be paid. If the escrow finds in favor of the payee (i.e. it determines the goods have arrived at the payer as per the contract text) then it gives the acknowledgment scalar to the payee.
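The escrow flow can be sketched with the same toy curve code. The derivation below (hashing an ECDH shared point together with the contract-terms hash) is an illustrative assumption about how payer and escrow could independently arrive at the same acknowledgment scalar, with the payee only ever handling the corresponding point:

```python
import hashlib, secrets

# secp256k1 parameters (from-scratch toy implementation, for illustration only)
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(a, b):
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0: return None
    if a == b:
        lam = 3 * a[0] * a[0] * pow(2 * a[1], -1, P) % P
    else:
        lam = (b[1] - a[1]) * pow(b[0] - a[0], -1, P) % P
    x = (lam * lam - a[0] - b[0]) % P
    return (x, (lam * (a[0] - x) - a[1]) % P)

def ec_mul(k, pt=G):
    acc = None
    while k:
        if k & 1: acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

payer_priv  = secrets.randbelow(N - 1) + 1
escrow_priv = secrets.randbelow(N - 1) + 1
payer_pub, escrow_pub = ec_mul(payer_priv), ec_mul(escrow_priv)

contract_hash = hashlib.sha256(b"goods X in exchange for LN payment").digest()

def ack_scalar(my_priv, their_pub):
    shared = ec_mul(my_priv, their_pub)  # ECDH: both sides reach the same point
    data = shared[0].to_bytes(32, "big") + contract_hash
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % N

# payer and escrow derive the same scalar without communicating it;
# the payee can verify the matching ack point with the escrow
assert ack_scalar(payer_priv, escrow_pub) == ack_scalar(escrow_priv, payer_pub)
ack_point = ec_mul(ack_scalar(payer_priv, escrow_pub))
```

Because the scalar commits to the contract hash, revealing the contract terms to the escrow during a dispute lets it recompute and hand over exactly the scalar the payment was locked to.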

- True non-custodial escrow: the escrow service never holds any funds.

- Needs payment point + scalar.

In fact, the acknowledgment scalar we use in stuckless and escrow can simply be the sum of each blinding scalar used at each forwarding node.
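Payment decorrelation works the same way: each hop tweaks the point by its own blinding scalar, so no two hops see the same point, and the sum of the blindings is exactly the extra scalar the payee needs. A toy sketch (illustrative names, from-scratch curve math):

```python
import secrets

# secp256k1 parameters (from-scratch toy implementation, for illustration only)
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(a, b):
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0: return None
    if a == b:
        lam = 3 * a[0] * a[0] * pow(2 * a[1], -1, P) % P
    else:
        lam = (b[1] - a[1]) * pow(b[0] - a[0], -1, P) % P
    x = (lam * lam - a[0] - b[0]) % P
    return (x, (lam * (a[0] - x) - a[1]) % P)

def ec_mul(k, pt=G):
    acc = None
    while k:
        if k & 1: acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

payee_scalar = secrets.randbelow(N - 1) + 1
point = ec_mul(payee_scalar)                      # point seen at the first hop

blindings = [secrets.randbelow(N - 1) + 1 for _ in range(3)]
for b in blindings:                               # each forwarding node adds b*G,
    point = ec_add(point, ec_mul(b))              # so every hop sees a fresh point

# the payee can claim with its own scalar plus the sum of all blinding scalars
assert point == ec_mul((payee_scalar + sum(blindings)) % N)
```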

- Privacy! Multiple forwarding nodes cannot coordinate to try to uncover the payer and payee of each payment.

- Needs payment point + scalar.

Price and Libra posts are shit boring, so let's focus on a technical topic for a change.

Let me start by presenting a few of the upcoming Bitcoin consensus changes.

(as these are *consensus* changes and not P2P changes, this does not include erlay or dandelion)

Let's hope the community strongly supports these upcoming updates!

# Schnorr

The sexy new signing algo.

## Advantages

## Disadvantages

# MuSig

A provably-secure way for a group of n participants to form an aggregate pubkey and signature. Creating their group pubkey does not require their coordination other than getting individual pubkeys from each participant, but creating their signature does require all participants to be online near-simultaneously.
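Key aggregation can be sketched as follows. This follows the shape of the MuSig key-aggregation formula (aggregate key = sum of H(L, Pi)·Pi, where L commits to all participant keys); it is a toy from-scratch illustration with an arbitrary hash serialization, not a vetted implementation:

```python
import hashlib, secrets

# secp256k1 parameters (from-scratch toy implementation, for illustration only)
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(a, b):
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0: return None
    if a == b:
        lam = 3 * a[0] * a[0] * pow(2 * a[1], -1, P) % P
    else:
        lam = (b[1] - a[1]) * pow(b[0] - a[0], -1, P) % P
    x = (lam * lam - a[0] - b[0]) % P
    return (x, (lam * (a[0] - x) - a[1]) % P)

def ec_mul(k, pt=G):
    acc = None
    while k:
        if k & 1: acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

def ser(pt):
    return pt[0].to_bytes(32, "big") + pt[1].to_bytes(32, "big")

privs = [secrets.randbelow(N - 1) + 1 for _ in range(3)]
pubs  = [ec_mul(k) for k in privs]

# L commits to the full set of keys; only published pubkeys are needed
L = hashlib.sha256(b"".join(ser(p) for p in pubs)).digest()

def coeff(pub):
    return int.from_bytes(hashlib.sha256(L + ser(pub)).digest(), "big") % N

agg = None
for pub in pubs:
    agg = ec_add(agg, ec_mul(coeff(pub), pub))

# sanity: the aggregate key corresponds to sum(coeff_i * priv_i) mod N
agg_priv = sum(coeff(p) * k for p, k in zip(pubs, privs)) % N
assert agg == ec_mul(agg_priv)
```

The per-key coefficients are what defend against rogue-key attacks: a naive sum of pubkeys would let the last participant choose a key that cancels everyone else's.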

## Advantages

## Disadvantages

# Taproot

Hiding a Bitcoin SCRIPT inside a pubkey, letting you sign with the pubkey without revealing the SCRIPT, or reveal the SCRIPT without signing with the pubkey.
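The hiding trick is the pay-to-contract tweak: publish Q = P + H(P || script)·G. The sketch below uses plain SHA-256 and an arbitrary serialization rather than bip-taproot's exact tagged hashing, so treat it as an illustration of the math only:

```python
import hashlib, secrets

# secp256k1 parameters (from-scratch toy implementation, for illustration only)
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(a, b):
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0: return None
    if a == b:
        lam = 3 * a[0] * a[0] * pow(2 * a[1], -1, P) % P
    else:
        lam = (b[1] - a[1]) * pow(b[0] - a[0], -1, P) % P
    x = (lam * lam - a[0] - b[0]) % P
    return (x, (lam * (a[0] - x) - a[1]) % P)

def ec_mul(k, pt=G):
    acc = None
    while k:
        if k & 1: acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

internal_priv = secrets.randbelow(N - 1) + 1
P_internal = ec_mul(internal_priv)
script = b"<timeout> OP_CSV OP_DROP <backup_key> OP_CHECKSIG"  # hypothetical SCRIPT

# tweak commits the output key to the script without taking extra space
t = int.from_bytes(hashlib.sha256(
        P_internal[0].to_bytes(32, "big") + script).digest(), "big") % N
Q = ec_add(P_internal, ec_mul(t))        # Q is what appears on-chain

# key path: sign with internal_priv + t, never revealing the script
assert Q == ec_mul((internal_priv + t) % N)
# script path: reveal P_internal and script; anyone can recheck the commitment
```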

## Advantages

## Disadvantages

# MAST

Encode each possible branch of a Bitcoin contract separately, and only require revelation of the exact branch taken, without revealing any of the other branches. One of the Taproot script versions will be used to denote a MAST construction. If the contract has only one branch then MAST does not add more overhead.
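A MAST commitment is just a Merkle tree over the branch scripts: you reveal one leaf plus its path of sibling hashes. A minimal sketch (hash construction is illustrative; a real proposal would use tagged hashes and a defined leaf encoding):

```python
import hashlib

def h(b):
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    level = [h(s) for s in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])      # duplicate the odd leaf
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, idx):
    level, proof = [h(s) for s in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = idx ^ 1
        proof.append((level[sib], sib < idx))   # (sibling hash, sibling-is-left)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return proof

def verify(leaf, proof, root):
    acc = h(leaf)
    for sib, sib_is_left in proof:
        acc = h(sib + acc) if sib_is_left else h(acc + sib)
    return acc == root

branches = [b"branch-A", b"branch-B", b"branch-C", b"branch-D"]
root = merkle_root(branches)
proof = merkle_proof(branches, 2)       # spend via branch-C only
assert verify(b"branch-C", proof, root)
assert len(proof) == 2                  # log2(4) sibling hashes revealed
```

With a single branch the "tree" is just that branch's hash, which is why one-branch contracts add no overhead.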

## Advantages

## Disadvantages

submitted by almkglor to Bitcoin

- Schnorr has a simpler security proof than the current ECDSA: a general heuristic is that a simpler proof is better, since simpler proofs leave less complexity for vulnerabilities to hide in. In practice most cryptographers would consider the two roughly equivalent in security.
- Linear signatures. This lets you do some operations on signatures which include making it possible for a n-of-n signing group to construct a single pubkey and signature, as well as providing secret communications channels (i.e. you provide the difference between two scalars privately, then create a signature using one scalar and publish it, which reveals the other scalar, letting you communicate this scalar while providing a signature that validates a transaction).
- As a completely new signing scheme, we can optimize signatures and public keys a little more than the existing ECDSA Bitcoin signatures, to help reduce resource usage. For instance, a SECP256K1 point requires 257 bits to store, typically one byte for the "extra" bit plus 32 bytes for the remaining 256 bits. That extra bit is really the "sign" of the point (positive or negative); if we enforce a restriction like "always use positive points", then a scalar which produces a negative point can simply be negated to produce a positive point, letting us cut one entire byte from precious onchain space.
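The "always use positive points" trick can be sketched directly: if a key's point has an odd y coordinate, negate the secret (k becomes n - k), which flips y to the even square root while leaving x untouched, so the 32-byte x alone identifies the key. A toy from-scratch illustration:

```python
import secrets

# secp256k1 parameters (from-scratch toy implementation, for illustration only)
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(a, b):
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0: return None
    if a == b:
        lam = 3 * a[0] * a[0] * pow(2 * a[1], -1, P) % P
    else:
        lam = (b[1] - a[1]) * pow(b[0] - a[0], -1, P) % P
    x = (lam * lam - a[0] - b[0]) % P
    return (x, (lam * (a[0] - x) - a[1]) % P)

def ec_mul(k, pt=G):
    acc = None
    while k:
        if k & 1: acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

k = secrets.randbelow(N - 1) + 1
x, y = ec_mul(k)
if y % 2:                       # "negative" point: negate the scalar instead
    k = N - k                   # (N - k)*G = -(k*G) = (x, P - y)
    x2, y2 = ec_mul(k)
    assert x2 == x              # x coordinate is unchanged...
    x, y = x2, y2
assert y % 2 == 0               # ...and y is now the even ("positive") root
# the 32-byte x alone now identifies the key: no extra sign byte needed
```

Because P is odd, y and P - y always have opposite parity, so "even y" is an unambiguous convention.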

- The Schnorr patent strongly discouraged development of Schnorr signatures. For this reason there are still details that hadn't been hammered out. The bip-schnorr proposal by Pieter hammers down some details, but there are still some concerns about multisignature and more complex usages below that are still being investigated.

- Provably secure. We already knew from Schnorr's work that Schnorr signatures allow multiparticipant signing, but his original proposal was actually insecure (this is part of the damage caused by Schnorr patenting the signature scheme: nobody bothered to correct his multiparticipant signing procedure, because why do free work for him?).
- We can create a group pubkey without telling the group we made such; we only need to get their individual pubkeys. This can be useful in some protocols, e.g. escrow protocols where we elect a group of n-of-n participants as a possible escrow signer; we create this group pubkey from the published pubkeys of the escrow services, but only reveal to them that this group pubkey involves them later in case of dispute (signing requires everyone's cooperation); if the trade has no dispute at all then the escrow group never needs to learn that the group pubkey included them or that the trade was potentially an escrow trade.
- Creates just a single signature and pubkey, greatly reducing the space needed onchain for n-of-n groups.
- No actual change in consensus needed, other than supporting Schnorr signatures as a consensus signing scheme.

- Only n-of-n; m-of-n requires verifiable secret sharing in addition to MuSig. In particular, for m-of-n we require that the participants also cooperate while generating the group pubkey (unlike the n-of-n case where we can just get published pubkeys, the m-of-n case requires that we perform some cooperative calculation to generate the private key shares for each participant).
- Unlike separate-signatures-and-pubkeys multisig (i.e. what the current OP_CHECKMULTISIG does), participants cannot simply each send a signature generated on their own and then go offline in no particular order. Instead, participants have to cooperatively generate a temporary signing nonce and then generate the signature. This is what forces all participants to be online at the time of signing. It can be mitigated somewhat, since you can pass around partial signatures: once you have the agreed-upon nonce and have created your partial signature, you can go offline. This might not be a particularly big disadvantage, but existing protocols might require an extra message round trip to handle the multiple-round nature of MuSig.

- You can show a SCRIPT and ignore the pubkey, or sign with the pubkey and ignore (and never reveal) the SCRIPT. This can be simulated somewhat with current Bitcoin by using a separate transaction that pays from a pubkey (or m-of-n or n-of-n multisig) to a SCRIPT, which you only publish if you want to take the SCRIPT path, but Taproot optimizes this by letting you dispense with that separate transaction. Some protocols that want to have some privacy (CoinSwap in particular) will need to have some way to hide the SCRIPT path and just use a pubkey (or m-of-n or n-of-n) in the "best case", and Taproot allows the "worst case" SCRIPT path to be somewhat more optimized if we need to take that branch.
- The exact proposed mechanism in bip-taproot by Pieter allows another version number to be embedded. So not only do we have current 16 available SegWit versions (v0 already in use, v1 is intended to be taken for Taproot, v2->15 are for future expansion) we also extend SegWit v1 to have 256 "script versions" too, only one of which will be used for MAST (see below). A new "script version" can completely drop the current stack-based SCRIPT language and replace it with a completely new language, for example.
- As a new SegWit version we can change the rules of the SCRIPT language to clean up some infelicities of the existing SCRIPT. For example, instead of OP_NOP operations we have OP_SUCCESS operations in the Taproot SCRIPT. When a softfork changes an OP_NOP to a different opcode, it can only either fail the SCRIPT or do nothing to the stack. When a softfork changes an OP_SUCCESS to a different opcode, it can do anything, including put new items on the stack, rearrange the stack, and so on.

- It uses the pay-to-contract construction, which is used to allow a UTXO to commit to a message (in Taproot's case, the SCRIPT) without spending more space other than the pubkey it pays to. However, other schemes might want to use pay-to-contract (because of the space savings of the ability to embed a message commitment without adding more space beyond the pubkey), so care must be taken to ensure that such schemes using pay-to-contract do not conflict with Taproot itself.
- Having a "SCRIPT only" UTXO (i.e. one which **cannot** be spent using a simple signature, but requires some more complex SCRIPT) requires that we compute a "nothing up my sleeve" (NUMS) point, i.e. a pubkey generated in such a way that we, or anyone, cannot possibly learn the corresponding privkey. This is already doable, but requires that we actually use a NUMS point if we want a UTXO that can only be spent via a particular SCRIPT.

- Privacy; branches not taken are not revealed, potentially hiding the possible participation of some entity with known pubkey if that entity ends up not signing for that branch.
- Can be used to emulate m-of-n while using only n-of-n MuSigs (remember, an n-of-n MuSig can be set up knowing only the pubkeys of all participants, but m-of-n requires the participants to split up an n-of-n MuSig key into shares, and each participant has to remember its own share, which can be difficult for hardware wallets to do safely). To emulate m-of-n, you take every subgroup of m participants, create an m-of-m MuSig pubkey for each subgroup, then make multiple OP_CHECKSIG scripts, each of which you treat as a separate branch in the MAST (you probably want to use a NUMS point as the Taproot pubkey that hides the MAST scripts, or select whichever subgroup of m is most likely to be online later and put that as the Taproot pubkey). You need m participants online at signing time. This has the side effect of not revealing participants who didn't sign.
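The subgroup construction can be sketched with itertools: enumerate every m-sized subset and build one aggregate key per subset, each becoming a MAST branch. The aggregation below is a naive point sum for illustration; real MuSig weights each key by a per-key hash coefficient:

```python
import secrets
from itertools import combinations
from math import comb

# secp256k1 parameters (from-scratch toy implementation, for illustration only)
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(a, b):
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0: return None
    if a == b:
        lam = 3 * a[0] * a[0] * pow(2 * a[1], -1, P) % P
    else:
        lam = (b[1] - a[1]) * pow(b[0] - a[0], -1, P) % P
    x = (lam * lam - a[0] - b[0]) % P
    return (x, (lam * (a[0] - x) - a[1]) % P)

def ec_mul(k, pt=G):
    acc = None
    while k:
        if k & 1: acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

pubs = [ec_mul(secrets.randbelow(N - 1) + 1) for _ in range(5)]

subgroup_keys = []
for subset in combinations(pubs, 3):     # every 3-of-5 subgroup
    agg = None
    for p in subset:                     # naive sum for illustration; real
        agg = ec_add(agg, p)             # MuSig weights each key by H(L, P_i)
    subgroup_keys.append(agg)

# one OP_CHECKSIG branch per subgroup key in the MAST
assert len(subgroup_keys) == comb(5, 3) == 10
```

Note that the branch count grows combinatorially, which is why this only suits modest m and n.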

- Requires O(log n) data to be revealed for n branches. This mildly leaks some information: if you see q hashes in the revealed path, then the number of branches is between 2^(q-1) and 2^q. This can be twisted around to make unbalanced MAST trees, but unbalanced MAST trees imply that some branches are more likely than others (you'd put the more likely branches in leaves nearer the root, so fewer data revealed == more likely), which again is a mild information leak. It might not be a particularly bad leak in practice, but, for example, Graftroot (which is not yet proposed) achieves O(1) data revelation for n branches, leaking nothing at all about the number of other branches or the probability of the revealed branch.

Hello,

I'm trying to generate a public key from a private one using the bitcoin library secp256k1 https://github.com/bitcoin-core/secp256k1 in C++:

Thank you

submitted by chizisch to Bitcoin

```cpp
#include <secp256k1.h>
#include <iostream>
#include <iomanip>
#include <sstream>
#include <string>
#include <vector>

int main() {
    // private key as hex; it must be decoded into 32 raw bytes
    // (the original code copied the ASCII characters directly,
    // which yields a different, wrong secret key)
    std::string privateKey = "baca891f5f0285e043496843d82341d15533f016c223d114e1e4dfd39e60ecb0";
    std::vector<unsigned char> seckey;
    for (std::size_t i = 0; i < privateKey.size(); i += 2)
        seckey.push_back(static_cast<unsigned char>(
            std::stoul(privateKey.substr(i, 2), nullptr, 16)));

    // creating the public key from it
    secp256k1_context* signingContext = secp256k1_context_create(SECP256K1_CONTEXT_SIGN);
    secp256k1_pubkey pkey;
    if (secp256k1_ec_pubkey_create(signingContext, &pkey, seckey.data()) == 0)
        throw "Creation error";

    // pkey.data is an opaque internal representation; serialize it into
    // the 33-byte compressed format before printing or parsing
    unsigned char serialized[33];
    std::size_t serializedLen = sizeof(serialized);
    secp256k1_ec_pubkey_serialize(signingContext, serialized, &serializedLen,
                                  &pkey, SECP256K1_EC_COMPRESSED);

    // print the result in hex
    std::stringstream ss;
    for (std::size_t i = 0; i < serializedLen; ++i)
        ss << std::setw(2) << std::setfill('0') << std::hex
           << static_cast<unsigned int>(serialized[i]);
    std::cout << ss.str() << std::endl;

    // parsing the serialized key now round-trips successfully
    secp256k1_context* noneContext = secp256k1_context_create(SECP256K1_CONTEXT_NONE);
    secp256k1_pubkey pubkey;
    if (secp256k1_ec_pubkey_parse(noneContext, &pubkey, serialized, serializedLen) == 0)
        std::cout << "Couldnt parse serialized public key" << std::endl;

    secp256k1_context_destroy(signingContext);
    secp256k1_context_destroy(noneContext);
    return 0;
}
```

The original version printed "Couldnt parse using pkey.data / Couldnt parse using hex public key" for two reasons: the hex string was copied byte-for-byte as the secret key instead of being hex-decoded (which is also why openssl produced a different public key from the same private key), and secp256k1_ec_pubkey_parse expects the 33-byte compressed or 65-byte uncompressed serialization, not the library-internal pkey.data contents. Decoding the hex and calling secp256k1_ec_pubkey_serialize before parsing fixes both.

Thank you

submitted by eragmus to Bitcoin

submitted by ennisaorg to conspiracy

Following the implementation of Segwit, it's time to talk about Schnorr signatures and how they are going to aid in scaling Bitcoin. Segwit works by altering the composition of Bitcoin blocks: it moves the signature data to another part of the block, hence the name "Segregated Witness". Now that the signature data has been reorganized, Schnorr signatures can be applied to it, signing everything at once rather than individually. According to CoinDesk, "Under the ECDSA scheme, each piece of a bitcoin transaction is signed individually, while with Schnorr signatures, all of this data can be signed once"[3].

*What are Schnorr Signatures?*

*How is this going to change/improve Bitcoin?*

*How are these changes going to implemented?*

*How is the work coming along?*

**The meat and potatoes of the matter**

*The verification equation:*

*The signature equation:*

"And aggregation shrinks the transaction sizes by the amounts of inputs they have. In our paper, we went and ran numbers on the shrinkage based on existing transaction patterns, there was a 28% reduction based on the existing history"[1].

This is VERY SIGNIFICANT. The "aggregation shrinks the transaction sizes by the amounts of inputs they have"[1]. This means the "more inputs you have, the more you gain because you add a shared cost of paying for one signature"[1].

"Validation time regardless of how complex your script is. You just prove that the script existed, it is valid, and the hash. But anything that further complicates the structure of transaction. This helps fungibility, privacy, and bandwidth"[1].

*What can you do to help?*

**Sources**

[1] http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2017-09-06-signature-aggregation/

[2] https://github.com/bitcoin-core/secp256k1

[3] https://www.coindesk.com/just-segwit-bitcoin-core-already-working-new-scaling-upgrade/

Edit: Finally got formatting to work.

submitted by RussianHacker1011101 to Bitcoin
- Based on the intractability of the discrete logarithm problem.
- Simplest provably secure signature scheme in the "random oracle" model.
- Generates short signatures.
- The hash function used does not need to be collision resistant (i.e. good for data compression).

- Smaller transactions
- more transactions per block
- faster transaction times
- Increase privacy[3]
- Curb spam attacks[3]
- Equal validation times regardless of script complexity
- realizing atomic swapping
- laying the groundwork for the Lightning Network
- Smaller block chain relative to number of transactions

- Soft fork
- Possible creation of a new OP Code (over 80 OP codes remain if I am correct)

- On the github page, where the majority of these changes are taking place[2]:
- 47 open issues and 79 resolved issues
- Last commit was 12 days ago (indicative of a healthy repository)
- over 800 commits
- 48 contributors
- According to the developers, Schnorr signatures are low-hanging fruit for improving scalability

sG = R - H(R || P1 || C || m)P1 - H(R || P2 || C || m)P2

R = H(R || P1 || C || m)*P1 + … + s*G

Summations are fundamentally faster to compute than multiplications: a multiplication can be expanded into repeated additions (5 * 7 = 5 + 5 + 5 + 5 + 5 + 5 + 5, or vice versa for 7), and at the hardware level multiplication is itself built out of additions and bitwise shifts. Algorithms that maximize the use of addition and shifting are comparatively faster than algorithms that use multiplication, division, or modulus, even when the time complexity of the two problems is equal. This is why "For 100000 keys, the speedup is approximately 8x"[1].
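The hardware claim above is loose (modern CPUs do have multipliers), but elliptic-curve scalar multiplication really is decomposed into point additions: the double-and-add method walks the bits of the scalar, needing on the order of 2·256 point additions for a 256-bit scalar instead of k of them. A from-scratch toy sketch:

```python
# secp256k1 parameters (from-scratch toy implementation, for illustration only)
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(a, b):
    """Affine point addition on y^2 = x^3 + 7 over GF(P); None is infinity."""
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0: return None
    if a == b:
        lam = 3 * a[0] * a[0] * pow(2 * a[1], -1, P) % P
    else:
        lam = (b[1] - a[1]) * pow(b[0] - a[0], -1, P) % P
    x = (lam * lam - a[0] - b[0]) % P
    return (x, (lam * (a[0] - x) - a[1]) % P)

def ec_mul(k, pt=G):
    """Double-and-add: one doubling per bit, one addition per set bit."""
    acc = None
    while k:
        if k & 1: acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

def naive_mul(k, pt=G):
    """Repeated addition: k point-adds -- hopeless for 256-bit scalars."""
    acc = None
    for _ in range(k):
        acc = ec_add(acc, pt)
    return acc

# identical results; double-and-add just needs far fewer additions for large k
assert ec_mul(12) == naive_mul(12)
assert ec_mul(1) == G
```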


- Run a Segwit node.
- Donate to the developers contributing to the improved protocol so they can spend as much time working on it as possible.


Schnorr signatures have been praised by Bitcoin developers for a while. Adam Back admitted it was more secure

https://bitcointalk.org/index.php?topic=511074.msg5727641#msg5727641

And it is much faster (scalable to verifying hundreds of thousands of transactions per second)

https://bitcointalk.org/index.php?topic=103172.0

DJB and friends claim that with their ed25519 curve (the "ed" is for Edwards) and careful implementation they can do batch verification of 70,000+ signatures per second on a cheap quad-core Intel Westmere chip, which is now several generations old. Given advances in CPUs over time, it seems likely that in the very near future the cited software will be capable of verifying many hundreds of thousands of signatures per second even if you keep the core count constant. But core counts are not constant - it seems likely that in 10 years or so 24-32 core chips will be standard even on consumer desktops. At that point a million signatures per second or more doesn't sound unreasonable.

Gavin Andresen, the former Bitcoin Chief Scientist, wanted to support it in Bitcoin

https://www.reddit.com/Bitcoin/comments/2jw5pm/im_gavin_andresen_chief_scientist_at_the_bitcoin/clfp3xj/

Bitcoin developers discussed including it: https://github.com/bitcoin-core/secp256k1/pull/212

However, it is still on the wishlist: https://en.bitcoin.it/wiki/Softfork_wishlist

Ed25519 is used in Tahoe-LAFS, one of the most respected crypto projects: https://moderncrypto.org/mail-archive/curves/2014/000069.html

LISK is IoT friendly

A good feature of the Schnorr signature is that, by design, it does not require a lot of computation on the signer side. Therefore, you can use it even on a computationally weak platform (think of a smart card or RFID), or on a platform with no hardware support for multiple-precision arithmetic.

Advantages of Schnorr signatures

According to David Harding, Schnorr signatures can bring many benefits

Smaller multisig transactions

Slightly smaller for all transactions

Plausible deniability for multisig

Plausible deniability of authorized parties using a third-party organizer (which doesn't need to be trusted with private keys), it's possible to prevent signers from knowing whether their private key is part of the set of signing keys.

Theoretical better security properties: Also, the ed25519 page linked above describes several ways it is resistant to side-channel attacks, which can allow hardware wallets to operate safely in less secure environments.

Faster signature verification: it likely takes fewer CPU cycles to verify an ed25519 Schnorr signature than a secp256k1 ECDSA signature.

Multi-crypto multisig: with two (slightly) different cryptosystems to choose from, high-security users can create 2-of-2 multisig pubkey scripts that require both ECDSA and Schnorr signatures, so their bitcoins can't be stolen if only one cryptosystem is broken.

https://bitcoin.stackexchange.com/questions/34288/what-are-the-implications-of-schnorr-signatures

Scalable multisig transactions

The magic of Schnorr signatures is most evident in their ability to aggregate signatures from multiple inputs into a single one to be validated for every individual transaction. The scaling implications of this are obvious: aggregation allows for non-trivial savings in terms of transmission, validation & storage for every peer on the network. The chart below illustrates the historical impact a switch to Schnorr signatures would have had in terms of space savings on the blockchain. (Alex B.)

Infamous malleability is a non-issue in LISK

Provably no inherent signature malleability, while ECDSA has a known malleability and lacks a proof that no other forms exist. Note that Segregated Witness already makes signature malleability not result in transaction malleability, however. https://www.elementsproject.org/elements/schnorr-signatures/

Bitcoin has malleability bugs

submitted by Corinne1992 to CryptoCurrency

The Schnorr algorithm has long been at the top of the wish list for many Bitcoin developers. And indeed, it has been a long time... Are they a top priority for Bitcoin Core? I do not know, but they seem to be pretty high up on the priority list. Here is a quick timeline:

- Hal Finney talks about speeding up signature verification by implementing “batch signature verification”. This does not refer to Schnorr, but it is the starting point. February 2011.
- Mike Hearn elaborates on "batch signature verification". He mentions a paper by the famous cryptographer D. Bernstein which successfully implemented such batch verification using the twisted Edwards curve Ed25519, which relies on Schnorr signatures. August 2012.
- An anonymous user released a white paper proposing Boneh–Lynn–Shacham in order to implement signature aggregation. September 2013.
- Adam Back talks about his preference for Schnorr signatures over ECDSA due to the possible signature aggregation. October 2013.
- Gregory Maxwell and Adam Back talk about Schnorr signatures natively supporting multisig. March 2014.
- Gavin Andresen mentions it on his wish list in October 2014.
- Here is pretty good summary on Schnorr signatures advantages by David Harding. January 2015.
- Gregory Maxwell mentions it (quite negatively I might add) during his talk at the SF Bitcoin Devs Seminar in **April 2015**. Once again the reference is related to multisig and signature aggregation (from minute mark 20 to 40 ish).
- Pieter Wuille and Gregory Maxwell wrote a Schnorr API which was committed in July/August 2015. The latest change dates from December 2015 and regards documentation.
- Gregory Maxwell referenced post #3 of this list as a starting point to justify the implementation of Schnorr signatures. His justification centers on batch verification and signature aggregation. February 2016.
- Pieter Wuille talks at Scaling Bitcoin 2016 Milan about Schnorr signatures, the history, the advantages and the problems they face. October 2016 (from minute mark 38 to 1H05 ish).
- Bitcoin Core technology roadmap announcing an upcoming whitepaper on Schnorr signatures, and also a BIP, to come by the end of 2017. March 2017.
- Pieter Wuille says that there will be a concrete proposal and implementation in 2018. November 2017.

submitted by pcdinh to Lisk

submitted by FamiliarSleep to CryptoMangust

Blockchain technology, which can simplify peer-to-peer transactions without the need for a third-party intermediary, can disrupt many existing centralized systems and applications, especially in the banking and financial sectors. Combined with powerful smart contracts, blockchain technology has the potential to change the dynamics of the economy around the world. A truly decentralized ecosystem takes control away from a few self-serving organizations and returns it to the masses. Blockchain technology is a genuinely innovative way to create, share, and edit data. Despite this, blockchain technology currently faces critical performance issues. For this reason, Hetachain has proposed a new blockchain network architecture that aims to be very easy to use, flexible for the users and developers on the platform, and very high-performance.

Its cryptography uses ECDSA (the Elliptic Curve Digital Signature Algorithm) with the secp256k1 curve for public/private-key cryptography. The private key is 256 bits of random data. https://heta.org/ The HETA address associated with a private key is the last 160 bits of the SHA3-256 (Keccak) hash of the corresponding public key.

Adoption issues: the speed at which blockchains confirm blocks has been too slow to solve real problems and replace existing systems. The root cause is the scalability issue that affects Bitcoin, Ethereum, and other blockchains. Both the Bitcoin and Ethereum blockchains faced network-congestion problems at the end of last year; slow transactions and excessively high fees threatened to defeat the purpose for which the technology was invented.

Even decentralized applications (DApps) built on existing blockchains are usually constrained by performance problems, without giving users the convenience and ease of use they are accustomed to. There is a need for a blockchain that is designed to provide the benefits of the technology while being scalable enough to handle the massive traffic of existing systems.

Hetachain's solution: Hetachain intends to solve the above problems and create a high-performance blockchain that scales, facilitates easy smart-contract creation, and is easy for end users to use. This is achieved through a hybrid consensus mechanism based on Delegated Proof of Stake (DPoS) and Byzantine Fault Tolerance (BFT). The goal is to create a blockchain that is scalable, with a shorter block time and high throughput. Using Hetachain's algorithm, a block is created every second and verified on the network.

Question for the Monero devs, or somebody who already went deeply into Cryptonote technical aspects.

As far as I know, the reasons a bitcoin transaction can be malleated come down to either differing encodings of the ECDSA signature (with trailing zeros or not), or the (relatively complex) scripting language used to redeem outputs, where in some cases the same logic can be obtained with different script content (thus changing the tx hash).

Cryptonote uses a different signature scheme, and does not use a scripting language at all. I believe that makes it less prone to transaction malleability than bitcoin. My question is: is it actually completely immune to it or not?

submitted by binaryFate to Monero

Hi all, I have a question about the Encrypt/Decrypt Message tool in Electron Cash.

submitted by aaron67_cc to electroncash

I knew that I can use a public key to encrypt some plain text and decrypt it with the corresponding private key. For example:

Public key: 03c8803b937bf4970fdcec5f43295c3adf4731aaa9287bd58612089480c096e8bd
Message: hello-world-123

With the tool, I got the encrypted text QklFMQKWrqTZYtpihiXAC/0HcPgZMlqoWxy3aREvX+jwKyAmMy93Y4uwz0bIDgQAu9A8RTv2SRy45mQOM5ryl7HiMr1NjVNNGAdKsv0+sMAq3IqV2g==

When I click the "Encrypt" button again, I get a brand-new encrypted text QklFMQMA0EA3pYj7FD1+TPh0qqUb6QmO6fr+qs58jvrBWND9hCA7EzQr68TpgBnLrq2Oj0n00YSYOdFnMxa52pve3JKDtsxEuLTL1+/ck2MDNcFp/g==

This happens again and again, and I can decrypt all of these different encrypted texts.

What I knew so far: Bitcoin and Bitcoin Cash use ECC (specifically the secp256k1 curve) to generate public and private key pairs, and use ECDSA, aka the Elliptic Curve Digital Signature Algorithm, to sign and verify messages. After some googling, the algorithm used to encrypt/decrypt text appears to be called ECIES. Am I right?

For the encrypt and decrypt part, why does the encrypted text change every time? What on earth happens inside this operation? I also found a GitHub repository about ECIES from BitPay: https://github.com/bitpay/bitcore-ecies But I am quite confused by its example: https://bitcore.io/api/ecies/ When we encrypt data, we should not need anyone's private keys, right? If I want to implement the encrypt/decrypt part, are there any libraries I can use? Any recommendations?

Sorry to bother you, but I really want to know the truth. Does anyone have knowledge of this? Thanks very much for helping and teaching me.
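On the "why does the ciphertext change every time" part: ECIES-style schemes draw a fresh ephemeral key for every encryption, so the same message encrypts to different bytes that all decrypt correctly. Here is a toy sketch of that idea, with a plain Diffie-Hellman group standing in for the elliptic curve; the names and parameters are illustrative, and none of this is secure or real ECIES.

```python
import hashlib, secrets

P = 2**127 - 1   # a prime modulus (toy size, not a real security parameter)
G = 3            # generator (toy choice)

def keygen():
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)                 # (private, public)

def keystream(shared, length):
    # derive a keystream from the shared secret by hashing it with a counter
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(shared.to_bytes(16, "big")
                              + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(pub, msg):
    r = secrets.randbelow(P - 2) + 1       # fresh ephemeral key on EVERY call
    shared = pow(pub, r, P)                # DH shared secret with recipient
    ct = bytes(a ^ b for a, b in zip(msg, keystream(shared, len(msg))))
    return pow(G, r, P), ct                # (ephemeral public part, ciphertext)

def decrypt(priv, eph_pub, ct):
    shared = pow(eph_pub, priv, P)         # recompute the same shared secret
    return bytes(a ^ b for a, b in zip(ct, keystream(shared, len(ct))))

priv, pub = keygen()
c1 = encrypt(pub, b"hello-world-123")
c2 = encrypt(pub, b"hello-world-123")
assert c1 != c2                                   # different bytes every time
assert decrypt(priv, *c1) == b"hello-world-123"   # yet both decrypt fine
assert decrypt(priv, *c2) == b"hello-world-123"
```

Real ECIES does the same dance on secp256k1: the ephemeral public point is shipped alongside the ciphertext (that is part of the QklFMQ... blob above), which is why encryption needs only the recipient's public key.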

Reading through the dev guide.

I'm on transactions. https://bitcoin.org/en/developer-guide#transactions

It says:

"An secp256k1 signature made by using the ECDSA cryptographic formula to combine certain transaction data (described below) with Bob's private key. This lets the pubkey script verify that Bob owns the private key which created the public key."

But the diagram below it suggests that the signature is derived from the private key + the pubkey script.

So is the signature derived just from the private key?

Or is it the private key + the pubkey script?

submitted by ecurrencyhodler to Bitcoin
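A hedged sketch of what the diagram is getting at (function names are illustrative, not Bitcoin Core's): the private key is only the signing key, while the data being signed is a double-SHA256 hash of serialized transaction data. In the legacy sighash algorithm, the pubkey script being spent is copied into the input before hashing, which is why the pubkey script appears to "feed into" the signature.

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    # Bitcoin's standard hash: SHA-256 applied twice
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def legacy_sighash(tx_without_scriptsigs: bytes, pubkey_script: bytes,
                   sighash_type: int = 0x01) -> bytes:
    # Simplified: the real serialization splices pubkey_script into the input
    # being signed; here we just concatenate to show what the hash commits to.
    preimage = (tx_without_scriptsigs + pubkey_script
                + sighash_type.to_bytes(4, "little"))
    return double_sha256(preimage)

digest = legacy_sighash(b"<serialized tx>", b"<pubkey script>")
# signature = ecdsa_sign(private_key, digest)
# i.e. the key signs; the pubkey script is part of what gets hashed.
assert len(digest) == 32
```

So the answer to the question is "both, in different roles": the pubkey script is part of the signed data, and the private key is what produces the signature over that data.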

Hi there,

I am struggling with the Pentesterlab challenge: https://pentesterlab.com/exercises/ecdsa

I'm wondering if anyone can shed some light on how to solve some steps in this challenge. You can read about a similar challenge here: https://ropnroll.co.uk/2017/05/breaking-ecdsa/

I suppose I have problems with extracting (r, s) from the ECDSA (SECP256k1) signature (details here: https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm)

I even tried to brute-force all possible (r, s) values, but no luck. Every time I receive an error 500.

It's a very interesting challenge and I want to break ECDSA finally.

Thanks in advance

submitted by unk1nd0n3 to webappsec


```python
# Imports and the sha2 helper are assumed here; the original post omitted them.
import hashlib
from ecdsa import SigningKey, SECP256k1, BadSignatureError
from ecdsa.util import string_to_number
from ecdsa.numbertheory import inverse_mod

def sha2(data):
    # SHA-256 digest of a str/bytes value (helper assumed by the snippet)
    if isinstance(data, str):
        data = data.encode()
    return hashlib.sha256(data).digest()

def recover_key(c1, sig1, c2, sig2, r_len, s_len):
    n = SECP256k1.order
    for s_idx in range(s_len, s_len + 2):
        for r_idx in range(r_len, r_len + 2):
            s1 = string_to_number(sig1[-s_idx:])
            s2 = string_to_number(sig2[-s_idx:])
            # https://bitcoin.stackexchange.com/questions/58853/how-do-you-figure-out-the-r-and-s-out-of-a-signature-using-python
            r1 = string_to_number(sig1[-(s_idx + r_idx + 2):-s_idx])
            r2 = string_to_number(sig2[-(s_idx + r_idx + 2):-s_idx])
            z1 = string_to_number(sha2(c1))
            z2 = string_to_number(sha2(c2))
            # Recover the reused nonce k from the two signatures
            k = (((z1 - z2) % n) * inverse_mod((s1 - s2), n)) % n
            # Recover the private key
            da1 = ((((s1 * k) % n) - z1) * inverse_mod(r1, n)) % n
            # SECP256k1 is the Bitcoin elliptic curve
            sk = SigningKey.from_secret_exponent(da1, curve=SECP256k1,
                                                 hashfunc=hashlib.sha256)
            # Sign the target login with the recovered key and the same nonce
            login_hash = sha2("admin")
            signature = sk.sign(login_hash, k=k)
            sig_dic_key = "r" + str(r_idx) + "s" + str(s_idx)
            try:
                # because who trusts python
                vk = sk.get_verifying_key()
                vk.verify(signature, login_hash)
                print(sig_dic_key, " - good signature")
            except BadSignatureError:
                print(sig_dic_key, " - BAD SIGNATURE")
```
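For what it's worth, the algebra behind this snippet (the classic repeated-nonce attack) can be checked on its own with nothing but arithmetic mod the group order n. In this sketch r is chosen randomly rather than derived from k*G, which doesn't affect the linear algebra being demonstrated.

```python
import secrets

# secp256k1 group order
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

d = secrets.randbelow(n - 1) + 1   # victim's private key
k = secrets.randbelow(n - 1) + 1   # nonce, mistakenly reused twice
r = secrets.randbelow(n - 1) + 1   # stand-in for x(kG); same in both signatures
z1 = secrets.randbelow(n)          # hash of message 1
z2 = secrets.randbelow(n)          # hash of message 2

# Two ECDSA s-values computed with the same k and r:
s1 = pow(k, -1, n) * (z1 + r * d) % n
s2 = pow(k, -1, n) * (z2 + r * d) % n

# Two linear equations in the two unknowns k and d — solve them:
k_rec = (z1 - z2) * pow(s1 - s2, -1, n) % n
d_rec = (s1 * k_rec - z1) * pow(r, -1, n) % n
assert k_rec == k and d_rec == d   # nonce and private key fully recovered
```

This is why the challenge works at all: once (r, s) pairs are extracted correctly, a repeated nonce hands over the private key. If the brute force keeps returning errors, the (r, s) extraction offsets are the part to double-check.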


I've just released version 0.4 of Signatory: a digital signature library for Rust, with support for creating and verifying Ed25519 signatures.

Signatory's provider-based design exposes a common thread-and-trait-safe API to several popular Rust crates for creating digital signatures, including ed25519-dalek, *ring*, and sodiumoxide.

It also supports using YubiHSM2 devices to produce digital signatures, thereby providing a single API for producing signatures with either software or hardware-based backends.

Future releases will add support for ECDSA, using either the NIST P-256 (secp256r1) or the secp256k1 elliptic curve used by Bitcoin and several other cryptocurrencies.

Additional neat things to potentially support in future releases: the MacBook TouchBar's SEP as a biometrically authenticated way to produce ECDSA P-256 signatures!

submitted by bascule to rust [link] [comments]

Currently Bitcoin uses secp256k1 with the ECDSA algorithm, though the same curve with the same public/private keys can also be used in some other algorithms such as Schnorr. secp256k1 was almost never used before Bitcoin became popular, but it is now gaining in popularity due to its several nice properties. Most commonly-used curves have a random structure, but secp256k1 was constructed in a special non-random way which allows for especially efficient computation; as a result, it is often more than 30% faster than other curves if the implementation is sufficiently optimized. The name refers to the set of curve parameters Bitcoin uses, which is defined in the Standards for Efficient Cryptography (SEC) published by the Standards for Efficient Cryptography Group (SECG).
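For reference, the secp256k1 domain parameters from SEC 2 can be written down and sanity-checked directly, e.g. by confirming that the standard base point G actually lies on the curve y² = x³ + 7 (mod p):

```python
# secp256k1 parameters as published in SEC 2
p = 2**256 - 2**32 - 977    # field prime
a, b = 0, 7                 # curve coefficients: y^2 = x^3 + a*x + b
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # order of G
h = 1                       # cofactor: the curve group has exactly n points

assert (Gy * Gy) % p == (Gx**3 + a * Gx + b) % p   # G is on the curve
```

The simple shape of p (close to 2²⁵⁶) and the a = 0 coefficient are exactly the "special non-random" structure that makes secp256k1 arithmetic unusually fast.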


- I am not a bookseller, but the author of the book. The book covers: classical ciphers, elementary number theory, information theory, symmetric-key cryptosystems, #rsa ciphers, and #asymmetric-key cryptosystems with discrete logarithms ...
- The third lecture covers elliptic curves and in particular secp256k1, the curve used by bitcoin. This curve is used for public keys and ECDSA, the digital signature algorithm of bitcoin.
- In this video we show how to use AES-256 symmetric encryption in practice to encrypt a file. It also shows how to encrypt and sign it with...
- John Wagnon discusses the basics and benefits of Elliptic Curve Cryptography (ECC) in this episode of Lightboard Lessons. Check out this article on DevCentra...