Secp256k1 - Bitcoin Wiki

Technical: Confidential Transactions and Their Implementation Tradeoffs

As requested by estradata here: https://old.reddit.com/Bitcoin/comments/iylou9/what_are_some_of_the_latest_innovations_in_the/g6heez1/
It is a general issue that crops up at the extremes of cryptography, with quantum breaks being just one such extreme for (classical) cryptography.

Computational vs Information-Theoretic

The dichotomy is between computationally infeasible vs. information-theoretically infeasible. Basically:
Quantum breaks represent a possible reduction in computational infeasibility of certain things, but not information-theoretic infeasibility.
For example, suppose you want to know what 256-bit preimages map to 256-bit hashes. In theory, you just need to build a table with 2^256 entries and start from 0x0000000000000000000000000000000000000000000000000000000000000000 and so on. This is computationally infeasible, but not information-theoretically infeasible.
However, suppose you want to know what preimages, of any size, map to 256-bit hashes. Since the preimages can be of any size, after finishing with 256-bit preimages, you have to proceed to 257-bit preimages. And so on. And there is no size limit, so you will literally never finish. Even if you lived forever, you would not complete it. This is information-theoretically infeasible.

Commitments

How does this relate to confidential transactions? Basically, every confidential transaction simply hides the value behind a homomorphic commitment. What is a homomorphic commitment? Okay, let's start with commitments. A commitment is something which lets you hide something, and later reveal what you hid. Until you reveal it, even if somebody has access to the commitment, they cannot reverse it to find out what you hid. This is called the "hiding property" of commitments. However, when you do reveal it (or "open the commitment"), then you cannot replace what you hid with some other thing. This is called the "binding property" of commitments.
For example, a hash of a preimage is a commitment. Suppose I want to commit to something. For example, I want to show that I can predict the future using the energy of a spare galaxy I have in my pocket. I can hide that something by hashing a description of the future. Then I can give the hash to you. You still cannot learn the future, because it's just a hash, and you can't reverse the hash ("hiding"). But suppose the future event occurs. I can reveal that I did, in fact, know the future. So I give you the description, and you hash it and compare it to the hash I gave earlier. Because of preimage resistance, I cannot retroactively change what I hid in the hash, so what I gave must have been known to me at the time that I gave you the commitment i.e. hash ("binding").
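Here is a minimal sketch of that commit-and-reveal flow in Python, using nothing but SHA256 from the standard library (the "prediction" string is obviously made up):

```python
import hashlib

def commit(secret: bytes) -> bytes:
    # Hiding: the hash reveals nothing practical about the preimage.
    # Binding: preimage resistance stops us swapping in a different secret later.
    return hashlib.sha256(secret).digest()

# Committer hides a prediction and hands over only the commitment.
prediction = b"the spare galaxy says: block 700000 will be mined on a Tuesday"
c = commit(prediction)

# Later, the committer opens the commitment; anyone can check it.
def open_commitment(commitment: bytes, revealed: bytes) -> bool:
    return hashlib.sha256(revealed).digest() == commitment

assert open_commitment(c, prediction)                       # honest reveal verifies
assert not open_commitment(c, b"a different 'prediction'")  # swapped secret fails
```

In practice you would also mix a random nonce into the preimage so that small, guessable messages can't be brute-forced --- which is exactly the role the "salt" plays in the Pedersen commitments further down.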

Homomorphic Commitments

A homomorphic commitment simply means that if I can do certain operations on preimages of the commitment scheme, there are certain operations on the commitments that would create similar ("homo") changes ("morphic") to the commitments. For example, suppose I have a magical function h() which is a homomorphic commitment scheme. It can hide very large (near 256-bit) numbers. Then if h() is homomorphic, there may be certain operations on numbers behind the h() that have homomorphisms after the h(). For example, I might have an operation <+> that is homomorphic in h() on +, or in other words, if I have two large numbers a and b, then h(a + b) = h(a) <+> h(b). + and <+> are different operations, but they are homomorphic to each other.
For example, elliptic curve scalars and points have homomorphic operations. Scalars (private keys) are "just" very large near-256-bit numbers, while points are a scalar times a standard generator point G. Elliptic curve operations exist where there is a <+> between points that is homomorphic on standard + on scalars, and a <*> between a scalar and a point that is homomorphic on standard * multiplication on scalars.
For example, suppose I have two large scalars a and b. I can use elliptic curve points as a commitment scheme: I can take a <*> G to generate a point A. It is hiding since nobody can learn what a is unless I reveal it (a and A can be used in standard ECDSA private-public key cryptography, with the scalar a as the private key and the point A as the public key, and the a cannot be derived even if somebody else knows A). Thus, it is hiding. At the same time, for a particular point A and standard generator point G, there is only one possible scalar a which when "multiplied" with G yields A. So scalars and elliptic curve points are a commitment scheme, with both hiding and binding properties.
Now, as mentioned there is a <+> operation on points that is homomorphic to the + operation on corresponding scalars. For example, suppose there are two scalars a and b. I can compute (a + b) <*> G to generate a particular point. But even if I don't know scalars a and b, but I do know points A = a <*> G and B = b <*> G, then I can use A <+> B to derive (a + b) <*> G (or equivalently, (a <*> G) <+> (b <*> G) == (a + b) <*> G). This makes points a homomorphic commitment scheme on scalars.
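To make the homomorphism concrete, here is a toy Python sketch. It does not use a real elliptic curve: a "point" is just pow(g, x, p) in a multiplicative group, <+> on points becomes multiplication mod p, and the parameters are far too small to be secure --- but the h(a) <+> h(b) = h(a + b) structure is the same.

```python
# Toy analogue of the elliptic-curve homomorphism (NOT the real secp256k1):
# a "point" is pow(g, a, p), and the point operation <+> is multiplication mod p.
p = 2**127 - 1          # a Mersenne prime; fine for a demo, far too small for real use
g = 3                   # stands in for the standard generator point G

def commit(a: int) -> int:
    return pow(g, a, p)          # plays the role of "a <*> G"

def point_add(A: int, B: int) -> int:
    return (A * B) % p           # plays the role of "<+>" on points

a, b = 123_456_789, 987_654_321
A, B = commit(a), commit(b)

# The homomorphism: + on the hidden scalars corresponds to <+> on the commitments.
assert point_add(A, B) == commit(a + b)
```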

Confidential Transactions: A Sketch

This is useful since we can use the near-256-bit scalars in SECP256K1 elliptic curves to easily represent values in a monetary system, and hide those values by using a homomorphic commitment scheme. We can use the hiding property to prevent people from learning the values of the money we are sending and receiving.
Now, in a proper cryptocurrency, a normal, non-coinbase transaction does not create or destroy coins: the values of the input coins are equal to the value of the output coins. We can use a homomorphic commitment scheme. Suppose I have a transaction that consumes an input value a and creates two output values b and c. That is, a = b + c, i.e. the sum of all inputs a equals the sum of all outputs b and c. But remember, with a homomorphic commitment scheme like elliptic curve points, there exists a <+> operation on points that is homomorphic to the ordinary school-arithmetic + addition on large numbers. So, confidential transactions can use points a <*> G as input, and points b <*> G and c <*> G as output, and we can easily prove that a <*> G = (b <*> G) <+> (c <*> G) if a = b + c, without revealing a, b, or c to anyone.
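Continuing the same toy group, the validator's job is a single comparison --- it checks that the input commitment equals the combined output commitments without ever seeing a, b, or c:

```python
# Same toy group as above: a validator holding only the commitments can check
# that the input amount equals the sum of the output amounts.
p, g = 2**127 - 1, 3
commit = lambda x: pow(g, x, p)

a = 50_000              # input amount (hidden from the validator)
b, c = 20_000, 30_000   # output amounts (also hidden)

A, B, C = commit(a), commit(b), commit(c)

# Validator's check: A == B <+> C, i.e. a == b + c, without ever learning a, b, or c.
assert A == (B * C) % p
```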

Pedersen Commitments

Actually, we cannot just use a <*> G as a commitment scheme in practice. Remember, Bitcoin has a cap on the number of satoshis ever to be created, and it's less than 2^53 satoshis, which is fairly trivial. I can easily compute all values of a <*> G for all values of a from 0 to 2^53 and know which a <*> G corresponds to which actual amount a. So in confidential transactions, we cannot naively use a <*> G commitments, we need Pedersen commitments.
If you know what a "salt" is, then Pedersen commitments are fairly obvious. A "salt" is something you add to e.g. a password so that the hash of the password is much harder to attack. Humans are idiots and when asked to generate passwords, will output a password drawn from fewer than 2^30 possibilities, which is fairly easy to grind. So what you do is that you "salt" a password by prepending a random string to it. You then hash the random string + password, and store the random string --- the salt --- together with the hash in your database. Then when somebody logs in, you take the password, prepend the salt, hash, and check if the hash matches the in-database hash, and if so you let them log in. Now, with a hash, even if somebody copies your password database, they can't get the passwords. They're hashed. And with a salt, a hacker's life gets even harder, because even techniques like rainbow tables stop helping. They can't just hash a possible password and check every hash in your db for something that matches. Instead, for each possible password, they have to prepend each salt, hash, then compare. That greatly increases the computational needs of a hacker, which is why salts are good.
What a Pedersen commitment is, is a point a <*> H, where a is the actual value you commit to, plus <+> another point r <*> G. H here is a second standard generator point, different from G. The r is the salt in the Pedersen commitment. It makes it so that even if you show (a <*> H) <+> (r <*> G) to somebody, they can't grind all possible values of a and try to match it with your point --- they also have to grind r (just as with the password-salt example above). And r is much larger, it can be a true near-256-bit number that is the range of scalars in SECP256K1, whereas a is constrained to "reasonable" numbers of satoshi, which cannot exceed 21 million Bitcoins.
Now, in order to validate a transaction with input a and outputs b and c, you only have to prove a = b + c. Suppose we are hiding those amounts using Pedersen commitments. You have an input of amount a, and you know a and r. The blockchain has an amount (a <*> H) <+> (r <*> G). In order to create the two outputs b and c, you just have to create two new r scalars such that r = r[0] + r[1]. This is trivial, you just select a new random r[0] and then compute r[1] = r - r[0], it's just basic algebra.
Then you create a transaction consuming the input (a <*> H) <+> (r <*> G) and outputs (b <*> H) <+> (r[0] <*> G) and (c <*> H) <+> (r[1] <*> G). You know that a = b + c, and r = r[0] + r[1], while fullnodes around the world, who don't know any of the amounts or scalars involved, can just take the points (a <*> H) <+> (r <*> G) and see if it equals (b <*> H) <+> (r[0] <*> G) <+> (c <*> H) <+> (r[1] <*> G). That is all that fullnodes have to validate, they just need to perform <+> operations on points and comparison on points, and from there they validate transactions, all without knowing the actual values involved.
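Here is the whole flow as a toy Python sketch, again with pow(g, x, p) standing in for x <*> G and multiplication mod p standing in for <+>. The second generator H is just another fixed element here; in a real system H is derived so that nobody can know its discrete log with respect to G.

```python
# Toy Pedersen commitments: "(a <*> H) <+> (r <*> G)" becomes H^a * G^r mod p.
import secrets

p, G, H = 2**127 - 1, 3, 7

def pedersen(amount: int, blind: int) -> int:
    return pow(H, amount, p) * pow(G, blind, p) % p

# The sender knows the input amount a and its blinding factor ("salt") r.
a = 50_000
r = 1 + secrets.randbelow(2**64)

# Split the amount into outputs b + c = a, and split the salt r = r0 + r1.
b, c = 20_000, 30_000
r0 = secrets.randbelow(r)
r1 = r - r0                      # just basic algebra, as in the text

tx_input   = pedersen(a, r)
tx_outputs = [pedersen(b, r0), pedersen(c, r1)]

# A full node sees only the three commitments and checks input == sum of outputs.
assert tx_input == tx_outputs[0] * tx_outputs[1] % p
```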

Computational Binding, Information-Theoretic Hiding

Like all commitments, Pedersen Commitments are binding and hiding.
However, each of these two properties can hold in one of two strengths: computational, or information-theoretic.
What does this mean? It's just a measure of how "impossible" breaking the binding or the hiding is. Pedersen commitments are computationally binding, meaning that a user of this commitment scheme with arbitrary time and space and energy can, in theory, replace the amount with something else. However, they are information-theoretically hiding, meaning an attacker with arbitrary time and space and energy cannot figure out exactly what got hidden behind the commitment.
But why?
Now, we have been using a and a <*> G as private keys and public keys in ECDSA and Schnorr. There is an operation <*> on a scalar and a point that generates another point, but we cannot "reverse" this operation. For example, even if I know A, and know that A = a <*> G, but do not know a, I cannot derive a --- there is no operation between A and G that lets me learn a.
Actually there is: I "just" need to have so much time, space, and energy that I just start counting a from 0 to 2^256 and find which a results in A = a <*> G. This is a computational limit: I don't have a spare universe in my back pocket I can use to do all those computations.
Now, replace a with h and A with H. Remember that Pedersen commitments use a "second" standard generator point. The generator points G and H are "not really special" --- they are just random points on the curve that we selected and standardized. There is no operation between H and G such that I can learn the h where H = h <*> G, though if I happen to have a spare universe in my back pocket I can "just" brute force it.
Suppose I do have a spare universe in my back pocket, and learn the h such that H = h <*> G. What can I do in Pedersen commitments?
Well, I have an amount a that is committed to by (a <*> H) <+> (r <*> G). But I happen to know h! Suppose I want to double my money a without involving Elon Musk. Then I just open the very same commitment as the amount 2 * a with the new salt r - a * h, because ((2 * a) <*> H) <+> ((r - a * h) <*> G) = (a <*> H) <+> ((a * h) <*> G) <+> ((r - a * h) <*> G) = (a <*> H) <+> (r <*> G).
That is what we mean by computationally binding: if I can compute h such that H = h <*> G, then I can find another number which opens the same commitment. And of course I'd make sure that number is much larger than what I originally had in that address!
Now, the reason why it is "only" computationally binding is that it is information-theoretically hiding. Suppose somebody knows h, but has no money in the cryptocurrency. All they see are points. They can try to find what the original amounts are, but because any amount can be mapped to "the same" point with knowledge of h (e.g. in the above, a and 2 * a got mapped to the same point by "just" replacing the salt r with r - a * h; this can be done for 3 * a, 4 * a etc.), they cannot learn historical amounts --- the a in historical amounts could be anything.
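The same toy group makes the break easy to see. Nobody is supposed to know h; below we choose it deliberately so we can play the attacker with a spare universe:

```python
# Playing the attacker with a "spare universe": we pick h ourselves, so we know
# the discrete log of H base G. Same toy group; same Pedersen construction.
p, G = 2**127 - 1, 3
order = p - 1                    # exponents ("scalars") live modulo the group order
h = 987_654_321_987_654_321
H = pow(G, h, p)                 # H = h <*> G, with h known to the attacker

def pedersen(amount, blind):
    return pow(H, amount, p) * pow(G, blind, p) % p

a, r = 50_000, 123_456_789_123_456_789
original = pedersen(a, r)

# Open the SAME point as twice the amount by shifting the salt to r - a*h.
assert pedersen(2 * a, (r - a * h) % order) == original

# The trick repeats for 3*a, 4*a, ... which is also why an observer who knows h
# still cannot tell which amount was originally committed (perfect hiding).
assert pedersen(3 * a, (r - 2 * a * h) % order) == original
```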
The drawback, though, is that --- as seen above --- arbitrary inflation is now introduced once somebody knows h. They can multiply their money by any arbitrary factor with knowledge of h.
It is impossible to have both perfect hiding (i.e. historical amounts remain hidden even after a computational break) and perfect binding (i.e. you can't later open the commitment to a different, much larger, amount).
Pedersen commitments just happen to have perfect hiding, but only computational binding. This means they keep historical values hidden, but in case of anything that allows much greater computational power --- including but not limited to quantum breaks --- they allow arbitrary inflation.

Changing The Tradeoffs with ElGamal Commitments

An ElGamal commitment is just a Pedersen commitment, but with the point r <*> G also stored in a separate section of the transaction.
This commits the r, and fixes it to a specific value. This prevents me from opening my (a <*> H) <+> (r <*> G) as ((2 * a) <*> H) <+> ((r - a * h) <*> G), because the (r - a * h) would not match the r <*> G sitting in a separate section of the transaction. This forces me to be bound to that specific value, and no amount of computation power will let me escape --- it is information-theoretically binding i.e. perfectly binding.
But that is now computationally hiding. An evil surveillor with arbitrary time and space can focus on the r <*> G sitting in a separate section of the transaction, and grind r from 0 to 2^256 to determine what r matches that point. Then from there, they can negate r to get (-r) <*> G and add it to the (a <*> H) <+> (r <*> G) to get a <*> H, and then grind that to determine the value a. With massive increases in computational ability --- including but not limited to quantum breaks --- an evil surveillor can see all the historical amounts of confidential transactions.
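In the toy group, the extra pinned point is what kills the forged opening from the previous sketch:

```python
# ElGamal-style variant in the toy group: publish r <*> G next to the Pedersen
# commitment. The forged opening from the previous sketch no longer fits.
p, G = 2**127 - 1, 3
order = p - 1
h = 987_654_321_987_654_321
H = pow(G, h, p)

def pedersen(amount, blind):
    return pow(H, amount, p) * pow(G, blind, p) % p

a, r = 50_000, 123_456_789_123_456_789
commitment   = pedersen(a, r)
published_rG = pow(G, r, p)                 # the extra field an ElGamal commitment carries

forged_salt = (r - a * h) % order           # salt needed to open the commitment as 2*a
assert pedersen(2 * a, forged_salt) == commitment   # still matches the Pedersen part...
assert pow(G, forged_salt, p) != published_rG       # ...but not the pinned r <*> G
```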

Conclusion

This is the source of the tradeoff: either you design confidential transactions so in case of a quantum break, historical transactions continue to hide their amounts, but inflation of the money is now unavoidable, OR you make the money supply sacrosanct, but you potentially sacrifice amount hiding in case of some break, including but not limited to quantum breaks.
submitted by almkglor to Bitcoin [link] [comments]

[ANN] RustCrypto: `k256` and `p256` v0.2.0: pure Rust secp256k1 and NIST P-256 ECDH and ECDSA (no_std/embedded-friendly)

Announcing v0.4.0 releases of these RustCrypto elliptic curve crates:
(see also ecdsa v0.7 and p384 v0.3)
The major notable new features in these releases are:

Elliptic Curve Diffie-Hellman

Key exchange protocol which establishes a shared secret between two parties.

Elliptic Curve Digital Signature Algorithm

Pervasively used public-key scheme for authenticating messages.

Notes on this release

These crates contain experimental pure Rust implementations of scalar field arithmetic for the respective elliptic curves (secp256k1, NIST P-256). These implementations are new, unaudited, and haven't received much public scrutiny. We have explicitly labeled them as being at a "USE AT YOUR OWN RISK" level of maturity.
That said, these implementations utilize the best modern practices for this class of elliptic curves (complete projective formulas providing constant time scalar multiplication).
In particular:
This release has been a cross-functional effort, with contributions from some of the best Rust elliptic curve cryptography experts. I'd like to thank everyone who's contributed, and hope that these crates are useful, especially for embedded cryptography and cryptocurrency use cases.
EDIT: the version in the title is incorrect. The correct version is v0.4.0, unfortunately the title cannot be edited.
submitted by bascule to rust [link] [comments]

Thanks to all who submitted questions for Shiv Malik in the GAINS AMA yesterday, it was great to see so much interest in Data Unions! You can read the full transcript here:

Gains x Streamr AMA Recap

Thanks to everyone in our community who attended the GAINS AMA yesterday with Shiv Malik. We were excited to see that so many people attended and gladly overwhelmed by the amount of questions we got from you on Twitter and Telegram. We decided to do a little recap of the session for anyone who missed it, and to archive some points we haven’t previously discussed with our community. Happy reading and thanks to Alexandre and Henry for having us on their channel!
What is the project about in a few simple sentences?
At Streamr we are building a real-time network for tomorrow’s data economy. It’s a decentralized, peer-to-peer network which we are hoping will one day replace centralized message brokers like Amazon’s AWS services. On top of that one of the things I’m most excited about are Data Unions. With Data Unions anyone can join the data economy and start monetizing the data they already produce. Streamr’s Data Union framework provides a really easy way for devs to start building their own data unions and can also be easily integrated into any existing apps.
Okay, sounds interesting. Do you have a concrete example you could give us to make it easier to understand?
The best example of a Data Union is the first one that has been built out of our stack. It's called Swash and it's a browser plugin.
You can download it here: http://swashapp.io/
And basically it helps you monetize the data you already generate (day in day out) as you browse the web. It's the sort of data that Google already knows about you. But this way, with Swash, you can actually monetize it yourself. The more people that join the union, the more powerful it becomes and the greater the rewards are for everyone as the data product sells to potential buyers.
Very interesting. What stage is the project/product at? It's live, right?
Yes. It's live. And the Data Union framework is in public beta. The Network is on course to be fully decentralized at some point next year.
How much can a regular person browsing the Internet expect to make for example?
So that's a great question. The answer is no one quite knows yet. We do know that this sort of data (consumer insights) is worth hundreds of millions and really isn't available in high quality. So With a union of a few million people, everyone could be getting 20-50 dollars a year. But it'll take a few years at least to realise that growth. Of course Swash is just one data union amongst many possible others (which are now starting to get built out on our platform!)
With Swash, I believe they now have 3,000 members. They need to get to 50,000 before they become really viable but they are yet to do any marketing. So all that is organic growth.
I assume the data is anonymized btw?
Yes. And there are in fact a few privacy-protecting tools Swash supplies to its users.
How does Swash compare to Brave?
So Brave really is about consent for people's attention and getting paid for that. They don't sell your data as such.
Swash can of course be a plugin with Brave and therefore you can make passive income browsing the internet. Whilst also then consenting to advertising if you so want to earn BAT.
Of course it's Streamr that is powering Swash. And we're looking at powering other DUs - say for example mobile applications.
The holy grail might be having already existing apps and platforms out there, integrating DU tech into their apps so people can consent (or not) to having their data sold - and then getting a cut of that revenue when it does sell.
The other thing to recognise is that the big tech companies monopolise data on a vast scale - data that we of course produce for them. That is stifling innovation.
Take for example a competitor map app. To effectively compete with Google maps or Waze, they need millions of users feeding real time data into it.
Without that - it's like Google maps used to be - static and a bit useless.
Right, so how do you convince these big tech companies that are producing these big apps to integrate with Streamr? Does it mean they wouldn't be able to monetize data as well on their end if it becomes more available through an aggregation of individuals?
If a map application does manage to scale to that level then inevitably Google buys them out - that's what happened with Waze.
But if you have a data union which bundles together the raw location data of millions of people then any application builder can come along and license that data for their app. This encourages all sorts of innovation and breaks the monopoly.
We're currently having conversations with Mobile Network operators to see if they want to pilot this new approach to data monetization. And that's what even more exciting. Just be explicit with users - do you want to sell your data? Okay, if yes, then which data point do you want to sell.
Then the mobile network operator (like T-mobile for example) then organises the sale of the data of those who consent and everyone gets a cut.
Streamr - in this example provides the backend to port and bundle the data, and also the token and payment rail for the payments.
So for big companies (mobile operators in this case), it's less logistics, handing over the implementation to you, and simply taking a cut?
It's a vision that we'll be able to talk more about more concretely in a few weeks time 😁
Compared to having to make sense of that data themselves (in the past) and selling it themselves
Sort of.
We provide the backend to port the data and the template smart contracts to distribute the payments.
They get to focus on finding buyers for the data and ensuring that the data that is being collected from the app is the kind of data that is valuable and useful to the world.
(Through our sister company TX, we also help build out the applications for them and ensure a smooth integration).
The other thing to add is that the reason why this vision is working, is that the current data economy is under attack. Not just from privacy laws such as GDPR, but also from Google shutting down cookies, bidstream data being investigated by the FTC (for example) and Apple making changes to IoS14 to make third party data sharing more explicit for users.
All this means that the only real places for thousands of multinationals to buy the sort of consumer insights they need to ensure good business decisions will be owned by Google/FB etc, or from SDKs or through this method - from overt, rich, consent from the consumer in return for a cut of the earnings.
A couple of questions to get a better feel about Streamr as a whole now and where it came from. How many people are in the team? For how long have you been working on Streamr?
We are around 35 people with one office in Zug, Switzerland and another one in Helsinki. But there are team members all over the globe, we’ve people in the US, Spain, the UK, Germany, Poland, Australia and Singapore. I joined Streamr back in 2017 during the ICO craze (but not for that reason!)
And did you raise funds so far? If so, how did you handle them? Are you planning to do any future raises?
We did an ICO back in Sept/Oct 2017 in which we raised around 30 Millions CHF. The funds give us enough runway for around five/six years to finalize our roadmap. We’ve also simultaneously opened up a sister company consultancy business, TX which helps enterprise clients implementing the Streamr stack. We've got no more plans to raise more!
What is the token use case? How did you make sure it captures the value of the ecosystem you're building
The token is used for payments on the Marketplace (such as for Data Union products, for example) and also for the broker nodes in the Network. (We haven't talked much about the P2P network, but it's our project's secret sauce.)
The broker nodes will be paid in DATAcoin for providing bandwidth. We are currently working together with Blockscience on our token economics. We’ve just started the second phase in their consultancy process and will soon be able to share more on the Streamr Network’s token economics.
But if you want to sum up the Network in a sentence or two - imagine the BitTorrent network being run by nodes who get paid to do so. Except that instead of passing around static files, it's realtime data streams.
That of course means it's really well suited for the IoT economy.
Well, let's continue with questions from Twitter and this one comes at the perfect time. Can Streamr Network be used to transfer data from IOT devices? Is the network bandwidth sufficient? How is it possible to monetize the received data from a huge number of IOT devices? From u/ EgorCypto
Yes, IoT devices are a perfect use case for the Network. When it comes to the network’s bandwidth and speed - the Streamr team just recently did extensive research to find out how well the network scales.
The result was that it is on par with centralized solutions. We ran experiments with network sizes between 32 to 2048 nodes and in the largest network of 2048 nodes, 99% of deliveries happened within 362 ms globally.
To put these results in context, PubNub, a centralized message brokering service, promises to deliver messages within 250 ms — and that’s a centralized service! So we're super happy with those results.
Here's a link to the paper:
https://medium.com/streamrblog/streamr-network-performance-and-scalability-whitepaper-adb461edd002
While we're on the technical side, second question from Twitter: Can you be sure that valuable data is safe and not shared with service providers? Are you using any encryption methods? From u/ CryptoMatvey
Yes, the messages in the Network are encrypted. Currently all nodes are still run by the Streamr team. This will change in the Brubeck release - our last milestone on the roadmap - when end-to-end encryption is added. This release adds end-to-end encryption and automatic key exchange mechanisms, ensuring that node operators can not access any confidential data.
If BTW - you want to get very technical the encryption algorithms we are using are: AES (AES-256-CTR) for encryption of data payloads, RSA (PKCS #1) for securely exchanging the AES keys and ECDSA (secp256k1) for data signing (same as Bitcoin and Ethereum).
Last question from Twitter, less technical now :) In their AMA ad, they say that Streamr has three unions, Swash, Tracey and MyDiem. Why does Tracey help fisherfolk in the Philippines monetize their catch data? Do they only work with this country or do they plan to expand? From u/ alej_pacedo
So yes, Tracey is one of the first Data Unions on top of the Streamr stack. Currently we are working together with the WWF-Philippines and the UnionBank of the Philippines on doing a first pilot with local fishing communities in the Philippines.
WWF is interested in the catch data to protect wildlife and make sure that no overfishing happens. And at the same time the fisherfolk are incentivized to record their catch data by being able to access micro loans from banks, which in turn helps them make their business more profitable.
So far, we have lots of interest from other places in South East Asia which would like to use Tracey, too. In fact TX have already had explicit interest in building out the use cases in other countries and not just for sea-food tracking, but also for many other agricultural products.
(I think they had a call this week about a use case involving cows 😂)
I recall late last year, that the Streamr Data Union framework was launched into private beta, now public beta was recently released. What are the differences? Any added new features? By u/ Idee02
The main difference will be that the DU 2.0 release will be more reliable and also more transparent since the sidechain we are using for micropayments is also now based on blockchain consensus (PoA).
Are there plans in the pipeline for Streamr to focus on the consumer-facing products themselves or will the emphasis be on the further development of the underlying engine?by u/ Andromedamin
We're all about what's under the hood. We want third party devs to take on the challenge of building the consumer facing apps. We know it would be foolish to try and do it all!
As a project how do you consider the progress of the project to fully developed (in % of progress plz) by u/ Hash2T
We're about 60% through I reckon!
What tools does Streamr offer developers so that they can create their own DApps and monetize data?What is Streamr Architecture? How do the Ethereum blockchain and the Streamr network and Streamr Core applications interact? By u/ CryptoDurden
We'll be releasing the Data Union framework in a few weeks from now and I think DApp builders will be impressed with what they find.
We all know that Blockchain has many disadvantages as well,
So why did Streamr choose blockchain as a combination for its technology?
What's your plan to merge Blockchain with your technologies to make it safer and more convenient for your users? By u/ noonecanstopme
So we're not a blockchain ourselves - that's important to note. The P2P network only uses BC tech for the payments. Why on earth for example would you want to store every single piece of info on a blockchain. You should only store what you want to store. And that should probably happen off chain.
So we think we got the mix right there.
What were the requirements needed for node setup ? by u/ John097
Good q - we're still working on that but those specs will be out in the next release.
How does the STREAMR team ensure good data is entered into the blockchain by participants? By u/ kartika84
Another great Q there! From the product buying end, this will be done by reputation. But ensuring the quality of the data as it passes through the network - if that is what you also mean - is all about getting the architecture right. In a decentralised network, that's not easy as data points in streams have to arrive in the right order. It's one of the biggest challenges but we think we're solving it in a really decentralised way.
What are the requirements for integrating applications with Data Union? What role does the DATA token play in this case? By u/ JP_Morgan_Chase
There are no specific requirements as such, just that your application needs to generate some kind of real-time data. Data Union members and administrators are both paid in DATA by data buyers coming from the Streamr marketplace.
Regarding security and legality, how does STREAMR guarantee that the data uploaded by a given user belongs to him and he can monetize and capitalize on it? By u/ kherrera22
So that's a sort of million dollar question for anyone involved in a digital industry. Within our system there are ways of ensuring that but in the end the negotiation of data licensing will still, in many ways be done human to human and via legal licenses rather than smart contracts. at least when it comes to sizeable data products. There are more answers to this but it's a long one!
Okay thank you all for all of those!
The AMA took place in the GAINS Telegram group 10/09/20. Answers by Shiv Malik.
submitted by thamilton5 to streamr [link] [comments]

ECDSA In Bitcoin

Digital signatures are considered the foundation of online sovereignty. The advent of public-key cryptography in 1976 paved the way for the creation of a global communications tool – the Internet, and a completely new form of money – Bitcoin. Although the fundamental properties of public-key cryptography have not changed much since then, dozens of different open-source digital signature schemes are now available to cryptographers.

How ECDSA was incorporated into Bitcoin

When Satoshi Nakamoto, a mystical founder of the first crypto, started working on Bitcoin, one of the key points was to select the signature schemes for an open and public financial system. The requirements were clear. An algorithm should have been widely used, understandable, safe enough, easy, and, what is more important, open-sourced.
Of all the options available at that time, he chose the one that met these criteria: Elliptic Curve Digital Signature Algorithm, or ECDSA.
At that time, native support for ECDSA was provided in OpenSSL, an open set of encryption tools developed by experienced cipher banks in order to increase the confidentiality of online communications. Compared to other popular schemes, ECDSA had such advantages as:
These are extremely useful features for digital money. At the same time, it provides a proportional level of security: for example, a 256-bit ECDSA key has the same level of security as a 3072-bit RSA key (Rivest, Shamir and Adleman) with a significantly smaller key size.

Basic principles of ECDSA

ECDSA is a process that uses elliptic curves and finite fields to “sign” data in such a way that third parties can easily verify the authenticity of the signature, but the signer himself reserves the exclusive opportunity to create signatures. In the case of Bitcoin, the “data” that is signed is a transaction that transfers ownership of bitcoins.
ECDSA has two separate procedures for signing and verifying. Each procedure is an algorithm consisting of several arithmetic operations. The signature algorithm uses the private key, and the verification algorithm uses only the public key.
To use ECDSA, such protocol as Bitcoin must fix a set of parameters for the elliptic curve and its finite field, so that all users of the protocol know and apply these parameters. Otherwise, everyone will solve their own equations, which will not converge with each other, and they will never agree on anything.
For all these parameters, Bitcoin uses very, very large (well, awesomely incredibly huge) numbers. It is important. In fact, all practical applications of ECDSA use huge numbers. After all, the security of this algorithm relies on the fact that these values are too large to recover a key by simple brute force. A 384-bit ECDSA key is considered safe enough by the NSA (USA) for protecting its most secret government information.
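For the curious, here is a compact, unoptimized Python sketch of the two ECDSA procedures over secp256k1's published parameters. It is purely illustrative --- not constant-time and not something to use with real keys:

```python
import hashlib
import secrets

# secp256k1 domain parameters (the fixed, standardized numbers the text refers to).
P = 2**256 - 2**32 - 977                 # field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # curve order
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(A, B):
    # Affine point addition on y^2 = x^3 + 7; None is the point at infinity.
    if A is None: return B
    if B is None: return A
    if A[0] == B[0] and (A[1] + B[1]) % P == 0: return None
    if A == B:
        lam = 3 * A[0] * A[0] * pow(2 * A[1], -1, P) % P
    else:
        lam = (B[1] - A[1]) * pow(B[0] - A[0], -1, P) % P
    x = (lam * lam - A[0] - B[0]) % P
    return (x, (lam * (A[0] - x) - A[1]) % P)

def mul(k, A):
    # Double-and-add scalar multiplication (plays the role of k * A).
    R = None
    while k:
        if k & 1:
            R = add(R, A)
        A, k = add(A, A), k >> 1
    return R

def sign(priv, msg):
    z = int.from_bytes(hashlib.sha256(msg).digest(), "big") % N
    k = 1 + secrets.randbelow(N - 1)     # per-signature nonce; must never repeat
    r = mul(k, G)[0] % N
    s = pow(k, -1, N) * (z + r * priv) % N
    return (r, s)

def verify(pub, msg, sig):
    r, s = sig
    z = int.from_bytes(hashlib.sha256(msg).digest(), "big") % N
    u1 = z * pow(s, -1, N) % N
    u2 = r * pow(s, -1, N) % N
    point = add(mul(u1, G), mul(u2, pub))
    return point is not None and point[0] % N == r

priv = 1 + secrets.randbelow(N - 1)      # private key: just a huge number
pub = mul(priv, G)                       # public key: a point; safe to share
sig = sign(priv, b"transfer 1 BTC to Alice")
assert verify(pub, b"transfer 1 BTC to Alice", sig)
assert not verify(pub, b"transfer 100 BTC to Mallory", sig)
```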

Replacement of ECDSA

Thanks to the hard work done by Pieter Wuille (a famous cryptography specialist) and his colleagues on an improved elliptic curve called secp256k1, Bitcoin's ECDSA has become even faster and more efficient. However, ECDSA still has some shortcomings, which can serve as a sufficient basis for its complete replacement. After several years of research and experimentation, a new signature scheme was established to increase the confidentiality and efficiency of Bitcoin transactions: Schnorr's digital signature scheme.
Schnorr's signature takes the process of using “keys” to a new level. It takes only 64 bytes when it gets into the block, which reduces the space occupied by transactions by 4%. Since transactions with the Schnorr signature are the same size, this makes it possible to pre-calculate the total size of the part of the block that contains such signatures. A preliminary calculation of the block size is the key to its safe increase in the future.
Keep up with the news of the crypto world at CoinJoy.io Follow us on Twitter and Medium. Subscribe to our YouTube channel. Join our Telegram channel. For any inquiries mail us at [[email protected]](mailto:[email protected]).
submitted by CoinjoyAssistant to btc [link] [comments]

The NSA Backdoor question....

So over a lunch conversation with a nocoiner I got what i thought sounded like a far fetched conspiracy theory, went a little like this... "Mmm yeah I reckon the NSA has a backdoor code that would allow them access to any private key simply by looking at the public key" ... I'm used to the "Bitcoin will be hacked" arguement but this seems to be on a different level.
Thoughts? It's a new one for me for sure and not sure how to answer it....
submitted by hungdoge to Bitcoin [link] [comments]


Can a quantum computer be used for bitcoin mining?

This has been bothering me for a while.
I'm a newbie in computer science, and I just found out about Grover’s algorithm, which can only be implemented on a quantum computer. Supposedly it can achieve a quadratic speedup over a classical computer, brute-forcing a solution to an n-bit symmetric encryption key in 2^(n/2) iterations.
This led me to think that, by utilizing a quantum computer or quantum simulator of about 40-qubits that runs Grover's algorithm, is it possible to mine bitcoins this way? The current difficulty of bitcoin mining is about 15,466,098,935,554 (approximately 2^44), which means that it would take about 2^44*2^32=2^76 SHA256 hashes before a valid block header hash is found.
However, by implementing Grover's algorithm, we would only need about sqrt(2^76) = 2^(76/2) = 2^38 iterations to discover a valid block header hash. A 38-qubit quantum computer should be sufficient in this case - which means a 40-qubit quantum computer should be more than enough to handle bitcoin mining.
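Spelling out the arithmetic in the question (this just restates the numbers above; it does not settle whether the qubit-count reasoning itself is right):

```python
# The arithmetic in the question, spelled out.
import math

difficulty = 15_466_098_935_554                   # figure quoted above
classical_hashes = difficulty * 2**32             # expected SHA256 evaluations, ~2^76
grover_iterations = math.isqrt(classical_hashes)  # quadratic speedup: ~2^38

print(f"classical work ~= 2^{math.log2(classical_hashes):.1f} hashes")
print(f"Grover work    ~= 2^{math.log2(grover_iterations):.1f} iterations")
```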
Therefore - is it possible to use quantum computers to mine bitcoins this way? I'm not too familiar with quantum computers, so please correct me if I missed something.......
NOTE: I am NOT asking whether it is possible to use quantum computers to break the ECDSA secp256k1 algorithm, which would effectively allow anyone to steal bitcoins from wallets. I know that this would require much more than 40 qubits, and is definitely not happening in the near-future.
Rather, I'm asking about bitcoin mining, which is a much easier problem than trying to break ECDSA secp256k1.
submitted by Palpatine88888 to QuantumComputing [link] [comments]

Technical: Upcoming Improvements to Lightning Network

Price? Who gives a shit about price when Lightning Network development is a lot more interesting?????
One thing about LN is that because there's no need for consensus before implementing things, figuring out the status of things is quite a bit more difficult than on Bitcoin. On one hand it lets larger groups of people work on improving LN faster without having to coordinate so much. On the other hand it leads to some fragmentation of the LN space, with compatibility problems occasionally coming up.
The below is just a smattering sample of LN stuff I personally find interesting. There's a bunch of other stuff, like splice and dual-funding, that I won't cover --- post is long enough as-is, and besides, some of the below aren't as well-known.
Anyway.....

"eltoo" Decker-Russell-Osuntokun

Yeah the exciting new Lightning Network channel update protocol!

Advantages

Myths

Disadvantages

Multipart payments / AMP

Splitting up large payments into smaller parts!

Details

Advantages

Disadvantages

Payment points / scalars

Using the magic of elliptic curve homomorphism for fun and Lightning Network profits!
Basically, currently on Lightning an invoice has a payment hash, and the receiver reveals a payment preimage which, when inputted to SHA256, returns the given payment hash.
Instead of using payment hashes and preimages, just replace them with payment points and scalars. An invoice will now contain a payment point, and the receiver reveals a payment scalar (private key) which, when multiplied with the standard generator point G on secp256k1, returns the given payment point.
This is basically Scriptless Script usage on Lightning: instead of HTLCs we have Scriptless Script Pointlocked Timelocked Contracts (PTLCs).
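A toy Python sketch of that swap, with pow(g, x, p) standing in for x <*> G on a real curve (illustrative only, not secure):

```python
# Toy sketch of the hash -> point swap.
import secrets
p, g = 2**127 - 1, 3

# The payee picks a payment scalar and puts only the payment *point* in the invoice.
payment_scalar = secrets.randbelow(p - 1)
payment_point = pow(g, payment_scalar, p)        # this is what the invoice contains

# Anyone holding the invoice can check a revealed scalar against the point:
def claims_payment(revealed_scalar: int) -> bool:
    return pow(g, revealed_scalar, p) == payment_point

assert claims_payment(payment_scalar)            # revealing the scalar claims the funds
assert not claims_payment(12345)                 # anything else does not
```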

Advantages

Disadvantages

Pay-for-data

Ensuring that payers cannot access data or other digital goods without proof of having paid the provider.
In a nutshell: the payment preimage used as a proof-of-payment is the decryption key of the data. The provider gives the encrypted data, and issues an invoice. The buyer of the data then has to pay over Lightning in order to learn the decryption key, with the decryption key being the payment preimage.
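A toy Python sketch of the idea. The XOR "cipher" built from SHA256 is a throwaway stand-in (a real implementation would use a proper authenticated cipher); the point is that the invoice's payment hash commits to the decryption key:

```python
# Toy sketch: the payment preimage doubles as the decryption key.
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

# Provider: pick a preimage, encrypt the goods, invoice on the hash of the preimage.
goods = b"the dataset the buyer is paying for"
preimage = secrets.token_bytes(32)
payment_hash = hashlib.sha256(preimage).digest()     # goes into the Lightning invoice
ciphertext = keystream_xor(preimage, goods)

# Buyer: paying the invoice forces the provider to reveal the preimage as
# proof-of-payment, and those same bytes decrypt the data.
assert hashlib.sha256(preimage).digest() == payment_hash
assert keystream_xor(preimage, ciphertext) == goods
```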

Advantages

Disadvantages

Stuckless payments

No more payments getting stuck somewhere in the Lightning network without knowing whether the payee will ever get paid!
(that's actually claiming a bit too much; payments can still get stuck, but what "stuckless" really enables is that we can now safely run another parallel payment attempt until any one of the payment attempts gets through).
Basically, by using the ability to add points together, the payer can enforce that the payee can only claim the funds if it knows two pieces of information:
  1. The payment scalar corresponding to the payment point in the invoice signed by the payee.
  2. An "acknowledgment" scalar provided by the payer to the payee via another communication path.
This allows the payer to make multiple payment attempts in parallel, unlike the current situation where we must wait for an attempt to fail before trying another route. The payer only needs to ensure it generates different acknowledgment scalars for each payment attempt.
Then, if at least one of the payment attempts reaches the payee, the payee can then acquire the acknowledgment scalar from the payer. Then the payee can acquire the payment. If the payee attempts to acquire multiple acknowledgment scalars for the same payment, the payer just gives out one and then tells the payee "LOL don't try to scam me", so the payee can only acquire a single acknowledgment scalar, meaning it can only claim a payment once; it can't claim multiple parallel payments.
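A toy sketch of the "two pieces of information" lock, in the same kind of stand-in group as before (names and parameters are illustrative only):

```python
# Toy sketch: funds are locked to the SUM of the payment point and an
# acknowledgment point, so the payee needs both scalars to claim.
import secrets
p, g = 2**127 - 1, 3
order = p - 1

payment_scalar = secrets.randbelow(order)   # known to the payee (its invoice secret)
ack_scalar     = secrets.randbelow(order)   # generated by the payer, one per attempt

payment_point = pow(g, payment_scalar, p)
ack_point     = pow(g, ack_scalar, p)

# The payer locks this attempt to payment_point <+> ack_point.
lock_point = payment_point * ack_point % p

def can_claim(scalar: int) -> bool:
    return pow(g, scalar, p) == lock_point

assert not can_claim(payment_scalar)                      # invoice secret alone is not enough
assert can_claim((payment_scalar + ack_scalar) % order)   # payee also needs the ack scalar
```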

Advantages

Disadvantages

Non-custodial escrow over Lightning

The "acknowledgment" scalar used in stuckless can be reused here.
The acknowledgment scalar is derived as an ECDH shared secret between the payer and the escrow service. On arrival of payment to the payee, the payee queries the escrow to determine if the acknowledgment point is from a scalar that the escrow can derive using ECDH with the payer, plus a hash of the contract terms of the trade (for example, to transfer some goods in exchange for Lightning payment). Once the payee gets confirmation from the escrow that the acknowledgment scalar is known by the escrow, the payee performs the trade, then asks the payer to provide the acknowledgment scalar once the trade completes.
If the payer refuses to give the acknowledgment scalar even though the payee has given over the goods to be traded, then the payee contacts the escrow again, reveals the contract terms text, and requests to be paid. If the escrow finds in favor of the payee (i.e. it determines the goods have arrived at the payer as per the contract text) then it gives the acknowledgment scalar to the payee.

Advantages

Disadvantages

Payment decorrelation

Because elliptic curve points can be added (unlike hashes), for every forwarding node, we can add a "blinding" point / scalar. This prevents multiple forwarding nodes from discovering that they have been on the same payment route. This is unlike the current payment hash + preimage, where the same hash is used along the route.
In fact, the acknowledgment scalar we use in stuckless and escrow can simply be the sum of each blinding scalar used at each forwarding node.
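One way to sketch this in the same toy group --- the exact way the blinding factors are distributed along the route differs in the real proposals, but the effect is that every hop sees an unrelated-looking point while the payee still needs the accumulated sum:

```python
# Toy sketch of per-hop blinding (stand-in group again; route handling simplified).
import secrets
p, g = 2**127 - 1, 3
order = p - 1

payment_scalar = secrets.randbelow(order)
blinds = [secrets.randbelow(order) for _ in range(3)]     # one blinding scalar per hop

# Each hop is handed a different-looking lock point.
locks = [pow(g, (payment_scalar + sum(blinds[i:])) % order, p) for i in range(3)]
assert len(set(locks)) == 3               # colluding hops can't match them up by value

# The payee claims with the payment scalar plus the sum of all blinding scalars.
claim = (payment_scalar + sum(blinds)) % order
assert pow(g, claim, p) == locks[0]
```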

Advantages

Disadvantages

submitted by almkglor to Bitcoin [link] [comments]

Upcoming Updates to Bitcoin Consensus

Price and Libra posts are shit boring, so let's focus on a technical topic for a change.
Let me start by presenting a few of the upcoming Bitcoin consensus changes.
(as these are consensus changes and not P2P changes it does not include erlay or dandelion)
Let's hope the community strongly supports these upcoming updates!

Schnorr

The sexy new signing algo.

Advantages

Disadvantages

MuSig

A provably-secure way for a group of n participants to form an aggregate pubkey and signature. Creating their group pubkey does not require their coordination other than getting individual pubkeys from each participant, but creating their signature does require all participants to be online near-simultaneously.
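A toy Python sketch of the shape of MuSig in a stand-in group (pow(g, x, p) plays the role of x*G). It deliberately omits the nonce-commitment round the real protocol needs for security; it is only meant to show non-interactive key aggregation and the single combined signature:

```python
import hashlib
import secrets

p, g = 2**127 - 1, 3
order = p - 1

def H(*parts) -> int:
    data = b"".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % order

privs = [secrets.randbelow(order) for _ in range(3)]
pubs  = [pow(g, d, p) for d in privs]

# Non-interactive key aggregation: each key is weighted by a hash of the whole
# key set, which is what defends against rogue-key attacks.
L  = tuple(sorted(pubs))
mu = [H(L, P) for P in pubs]
agg_pub = 1
for P, m in zip(pubs, mu):
    agg_pub = agg_pub * pow(P, m, p) % p

# Signing requires everyone online: combine nonces, then combine partial sigs.
msg = "spend output 0"
ks = [secrets.randbelow(order) for _ in privs]
R = 1
for k in ks:
    R = R * pow(g, k, p) % p
c = H(agg_pub, R, msg)
s = sum(k + c * m * d for k, m, d in zip(ks, mu, privs)) % order

# Verification looks exactly like a single-key Schnorr check on the aggregate key.
assert pow(g, s, p) == R * pow(agg_pub, c, p) % p
```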

Advantages

Disadvantages

Taproot

Hiding a Bitcoin SCRIPT inside a pubkey, letting you sign with the pubkey without revealing the SCRIPT, or reveal the SCRIPT without signing with the pubkey.
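A toy sketch of the tweak that makes this work, again in a stand-in group rather than on the real curve; the script text is made up:

```python
import hashlib
import secrets

p, g = 2**127 - 1, 3
order = p - 1

def H(*parts) -> int:
    data = b"".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % order

internal_priv = secrets.randbelow(order)
internal_pub  = pow(g, internal_priv, p)
script = "2-of-3 multisig OR refund after 1000 blocks"     # the hidden SCRIPT

# The on-chain key is the internal key tweaked by a hash committing to the script.
tweak = H(internal_pub, script)
output_pub  = internal_pub * pow(g, tweak, p) % p
output_priv = (internal_priv + tweak) % order

# Key path: sign with the tweaked key, revealing nothing about the script.
assert pow(g, output_priv, p) == output_pub

# Script path: reveal the internal key and the script; anyone can re-derive the tweak.
assert internal_pub * pow(g, H(internal_pub, script), p) % p == output_pub
```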

Advantages

Disadvantages

MAST

Encode each possible branch of a Bitcoin contract separately, and only require revelation of the exact branch taken, without revealing any of the other branches. One of the Taproot script versions will be used to denote a MAST construction. If the contract has only one branch then MAST does not add more overhead.
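A toy sketch of the branch-reveal using nothing but SHA256; the branch texts are made up and a real MAST commits to scripts, not prose:

```python
import hashlib

h = lambda b: hashlib.sha256(b).digest()

branches = [b"Alice and Bob both sign",
            b"Alice alone signs after 1000 blocks",
            b"Bob alone signs after 2000 blocks",
            b"arbitrator plus either party sign"]

# Commit only to the Merkle root of the branch hashes.
leaves = [h(b) for b in branches]
level  = [h(leaves[0] + leaves[1]), h(leaves[2] + leaves[3])]
root   = h(level[0] + level[1])

# Spend via branch 2: reveal that branch, its sibling leaf, and the other subtree hash.
revealed, sibling, uncle = branches[2], leaves[3], level[0]
assert h(uncle + h(h(revealed) + sibling)) == root    # branches 0, 1 and 3 stay hidden
```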

Advantages

Disadvantages

submitted by almkglor to Bitcoin [link] [comments]

Creating a public key from a private key using secp256k1 library in C++

Hello,
I'm trying to generate a public key from a private one using the bitcoin library secp256k1 https://github.com/bitcoin-core/secp256k1 in C++:

```cpp
// private key
std::string privateKey = "baca891f5f0285e043496843d82341d15533f016c223d114e1e4dfd39e60ecb0";
const char* cPrivateKey = privateKey.c_str();

// creating public key from it
secp256k1_context *signingContext = secp256k1_context_create(SECP256K1_CONTEXT_SIGN);
secp256k1_pubkey pkey;
unsigned char *seckey = (unsigned char*) malloc(privateKey.size() * sizeof(unsigned char));
std::copy(cPrivateKey, cPrivateKey + privateKey.size(), seckey);
if (secp256k1_ec_pubkey_create(signingContext, &pkey, seckey) == 0)
    throw "Creation error";

// print the result in hex
std::stringstream ss;
for (int i = 0; i < 64; ++i) {
    ss << std::setw(2) << std::hex << (0xff & (unsigned int)pkey.data[i]);
}
std::cout << ss.str() << std::endl;

// parsing the result
std::string pkey_data = ss.str();
secp256k1_context *noneContext = secp256k1_context_create(SECP256K1_CONTEXT_NONE);
secp256k1_pubkey pubkey;
if (secp256k1_ec_pubkey_parse(noneContext, &pubkey, pkey.data, 64) == 0)
    std::cout << "Couldnt parse using pkey.data" << std::endl;
if (secp256k1_ec_pubkey_parse(noneContext, &pubkey, pkay_data, pkey_data.size()) == 0)
    std::cout << "Couldnt parse using hex public key " << std::endl;
```
The output:
```
1b9e55408c5141414e8337adef57ead18f62444fdf5f8c897d0dc812696a6141a919254d3e750075a2a9ba32dc4ed30c84e65f27e431b59b94a2aafe3e80a974
Couldnt parse using pkey.data
Couldnt parse using hex public key
```
The idea is to parse the public key from the generated one to see if it is really correct. Also, I tried using OpenSSL with the same private key and I get a different public key from the private one using ECDSA secp256k1. Any help or example on how to use the library in C++ is more than welcome.
Thank you
submitted by chizisch to Bitcoin [link] [comments]

NSA Suite B Cryptography - NSA deprecates P-256 of ECDSA (Bitcoin uses P-256), in favor of Curve P-384

NSA Suite B Cryptography - NSA deprecates P-256 of ECDSA (Bitcoin uses P-256), in favor of Curve P-384 submitted by eragmus to Bitcoin [link] [comments]

CIA PROJECT • BITCOIN ECC Crypto Creator Neal Koblitz LIES about NSA Funding

submitted by ennisaorg to conspiracy [link] [comments]

How the core developers are working to further scale Bitcoin.

Following the implementation of Segwit, it's time to talk about Schnorr signatures and how they are going to aid in scaling Bitcoin. Segwit works by altering the composition of Bitcoin blocks. It moves the signature data to another part of the block, hence the name "Segregated Witness". Now that the signature data has been reorganized, Schnorr signatures can be applied to them, where they are signed at once rather than individually. According to coindesk, "Under the ECDSA scheme, each piece of a bitcoin transaction is signed individually, while with Schnorr signatures, all of this data can be signed once"[3].
What are Schnorr Signatures?
How is this going to change/improve Bitcoin?
How are these changes going to implemented?
How is the work coming along?
The meat and potatoes of the matter
The verification equation:
sG = R - H(R || P1 || C || m)P1 - H(R || P2 || C || m)P2 
The signature equation:
R = H(R || P1 || C || m)P1 + … + s*G 
Summations are fundamentally faster to compute on computers than multiplication. This is because each multiplication operation is a sum of n terms, where n is the multiplier. So 5 * 7 = 5 + 5 + 5 + 5 + 5 + 5 + 5 (or vice versa for 7). Computers only have the capacity to perform two operations natively - addition and bitwise shifts. Algorithms that maximize the use of these operations are comparatively faster than algorithms that use multiplication, division, or modulus even when the time complexity of the two problems is equal. This is why "For 100000 keys, the speedup is approximately 8x"[1].
"And aggregation shrinks the transaction sizes by the amounts of inputs they have. In our paper, we went and ran numbers on the shrinkage based on existing transaction patterns, there was a 28% reduction based on the existing history"[1].
This is VERY SIGNIFICANT. The "aggregation shrinks the transaction sizes by the amounts of inputs they have"[1]. This means the "more inputs you have, the more you gain because you add a shared cost of paying for one signature"[1].
"Validation time regardless of how complex your script is. You just prove that the script existed, it is valid, and the hash. But anything that further complicates the structure of transaction. This helps fungibility, privacy, and bandwidth"[1].
What can you do to help?
Sources
[1] http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2017-09-06-signature-aggregation/
[2] https://github.com/bitcoin-core/secp256k1
[3] https://www.coindesk.com/just-segwit-bitcoin-core-already-working-new-scaling-upgrade/
Edit: Finally got formatting to work.
submitted by RussianHacker1011101 to Bitcoin [link] [comments]

Did you know that LISK uses Schnorr signature-based Ed25519 scheme which is more secure, much faster, more scalable than secp256k1 which is used by Bitcoin, Ethereum, Stratis

Schnorr signatures have been praised by Bitcoin developers for a while. Adam Back admitted they are more secure:
https://bitcointalk.org/index.php?topic=511074.msg5727641#msg5727641
And they are much faster (scalable to verifying hundreds of thousands of transactions per second):
https://bitcointalk.org/index.php?topic=103172.0
DJB and friends claim that with their ed25519 curve (the "ed" is for Edwards) and careful implementation they can do batch verification of 70,000+ signatures per second on a cheap quad-core Intel Westmere chip, which is now several generations old. Given advances in CPUs over time, it seems likely that in the very near future the cited software will be capable of verifying many hundreds of thousands of signatures per second even if you keep the core count constant. But core counts are not constant - it seems likely that in 10 years or so 24-32 core chips will be standard even on consumer desktops. At that point a million signatures per second or more doesn't sound unreasonable.
Gavin Andresen, the former Bitcoin Chief Scientist, wanted to support it in Bitcoin:
https://www.reddit.com/Bitcoin/comments/2jw5pm/im_gavin_andresen_chief_scientist_at_the_bitcoin/clfp3xj/
Bitcoin developers discussed including it: https://github.com/bitcoin-core/secp256k1/pull/212
However, it is still on the wishlist: https://en.bitcoin.it/wiki/Softfork_wishlist
Ed25519 is used in Tahoe-LAFS, one of the most respected crypto projects: https://moderncrypto.org/mail-archive/curves/2014/000069.html
LISK is IoT friendly
The good feature of Schnorr signatures is that by design they do not require a lot of computation on the signer side. Therefore, you can use them even on a computationally weak platform (think of a smart card or RFID), or on a platform with no hardware support for multiple-precision arithmetic.
Advantages of Schnorr signatures
According to David Harding, Schnorr signatures can bring many benefits
Smaller multisig transactions
Slightly smaller for all transactions
Plausible deniability for multisig
Plausible deniability of authorized parties: using a third-party organizer (which doesn't need to be trusted with private keys), it's possible to prevent signers from knowing whether their private key is part of the set of signing keys.
Theoretical better security properties: Also, the ed25519 page linked above describes several ways it is resistant to side-channel attacks, which can allow hardware wallets to operate safely in less secure environments.
Faster signature verification: it likely takes fewer CPU cycles to verify an ed25519 Schnorr signature than a secp256k1 ECDSA signature.
Multi-crypto multisig: with two (slightly) different cryptosystems to choose from, high-security users can create 2-of-2 multisig pubkey scripts that require both ECDSA and Schnorr signatures, so their bitcoins can't be stolen if only one cryptosystem is broken.
https://bitcoin.stackexchange.com/questions/34288/what-are-the-implications-of-schnorr-signatures
Scalable multisig transactions
The magic of Schnorr signatures is most evident in their ability to aggregate the signatures from multiple inputs into a single one that is validated once per transaction. The scaling implications of this are obvious: aggregation allows for non-trivial savings in terms of transmission, validation and storage for every peer on the network. The chart below illustrates the historical impact a switch to Schnorr signatures would have had in terms of space savings on the blockchain. (Alex B.)
Infamous malleability is a non-issue in LISK
Provably no inherent signature malleability, while ECDSA has a known malleability and lacks a proof that no other forms exist. Note, however, that Segregated Witness already prevents signature malleability from resulting in transaction malleability. https://www.elementsproject.org/elements/schnorr-signatures/
Bitcoin has malleability bugs
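To illustrate the aggregation idea from the "Scalable multisig transactions" paragraph above, here is a toy interactive two-signer Schnorr sketch over secp256k1 (my own illustrative Python using the python-ecdsa primitives; naive key aggregation like this is vulnerable to rogue-key attacks, which is exactly the kind of theoretical issue real proposals such as MuSig address): the verifier sees one combined key and one signature, regardless of how many signers participated.

    import os, hashlib
    from ecdsa import SECP256k1

    G, n = SECP256k1.generator, SECP256k1.order
    rand = lambda: int.from_bytes(os.urandom(32), "big") % n
    H = lambda *ints: int.from_bytes(
        hashlib.sha256(b"".join(i.to_bytes(32, "big") for i in ints)).digest(), "big") % n

    # Two signers, one aggregate public key (naive sum of points)
    x1, x2 = rand(), rand()
    P = G * x1 + G * x2

    # Interactive signing: combine nonce points, derive one common challenge
    m = int.from_bytes(hashlib.sha256(b"spend these two inputs").digest(), "big")
    r1, r2 = rand(), rand()
    R = G * r1 + G * r2
    e = H(R.x(), P.x(), m)
    s = (r1 + e * x1 + r2 + e * x2) % n        # the partial signatures simply add up

    # One signature to validate, however many signers/inputs there were
    assert G * s == R + P * e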
submitted by Corinne1992 to CryptoCurrency [link] [comments]

Investigating the work behind Bitcoin: a history of Schnorr signatures within Bitcoin.

Introduction

Schnorr signatures are currently on the Bitcoin Core roadmap, and an implementation was supposed to be released before the end of this year. Being a mathematician, I have been inquiring about Schnorr signatures, the math behind them, and their implications for Bitcoin if they are ever implemented. This post is a list of links for anyone who also wants sources on the subject.
TLDR: To sum it up, Schnorr signatures were first introduced as a potential optimization (batch verification) and then as a possible scheme for signature aggregation. None of this has been implemented yet, as many theoretical issues remain. To learn more about these issues and about what "signature aggregation" means, please refer to the links in this post.

A history of Schnorr signatures within Bitcoin

As you may or may not know:
The Schnorr algorithm has long been at the top of the wish list for many Bitcoin developers.
And indeed, it has been a long time... Are they a top priority for Bitcoin Core? I do not know, but they seem to be pretty high up on the priority list. Here is a quick timeline:
  1. Hal Finney talks about speeding up signature verification by implementing “batch signature verification”. This does not refer to Schnorr, but it is the starting point. February 2011.
  2. Mike Hearn elaborates on “batch signature verification”. He mentions a paper by the famous cryptographer D. J. Bernstein, who successfully implemented such batch verification using the twisted Edwards curve Ed25519, which relies on Schnorr-style signatures. August 2012.
  3. An anonymous user released a white paper proposing Boneh–Lynn–Shacham (BLS) signatures in order to implement signature aggregation. September 2013.
  4. Adam Back talks about his preference for Schnorr signatures over ECDSA due to the possible signature aggregation. October 2013.
  5. Gregory Maxwell and Adam Back talk about Schnorr signatures natively supporting multisig. March 2014.
  6. Gavin Andresen mentions it on his wish list in October 2014.
  7. Here is pretty good summary on Schnorr signatures advantages by David Harding. January 2015.
  8. Gregory Maxwell mentions it (quite negatively I might add) during his talk at the SF Bitcoin Devs Seminar in April 2015. Once again the reference is related to multisig and signature aggregation (from minute mark 20 to 40 ish).
  9. Pieter Wuille and Gregory Maxwell wrote a Schnorr API, which was committed in July/August 2015. The latest change dates from December 2015 and regards documentation.
  10. Gregory Maxwell referenced post #3 of this list as a starting point to justify the implementation of Schnorr signatures. His justifications concern batch verification and signature aggregation. February 2016.
  11. Pieter Wuille talks at Scaling Bitcoin 2016 Milan about Schnorr signatures, the history, the advantages and the problems they face. October 2016 (from minute mark 38 to 1H05 ish).
  12. Bitcoin Core technology roadmap announcing an upcoming whitepaper on Schnorr signatures, as well as a BIP to be announced by the end of 2017. March 2017.
  13. Pieter Wuille says that there will be a concrete proposal and implementation in 2018. November 2017.
Edit: formatting.
submitted by Azeroth7 to btc [link] [comments]

About Hetachain Technology

https://preview.redd.it/vvnngd5ijd021.png?width=720&format=png&auto=webp&s=c684476c75b93de4b72c0657f2e432acbdbb1518
Blockchain technology, which can simplify peer-to-peer transactions without the need for a third-party intermediary, can disrupt many existing centralized systems and applications, especially in the banking and financial sectors. Combined with powerful smart contracts, blockchain technology has the potential to change the dynamics of economies around the world. A truly decentralized ecosystem takes control away from a few self-serving organizations and returns it to the masses.
Blockchain technology is a genuine innovation in how data is created, shared and edited. Despite this, it currently faces critical performance issues. For this reason, Hetachain proposes a new blockchain network architecture that is easy to use, more flexible for the users and developers building on the platform, and offers very high performance.
The cryptography uses ECDSA (the Elliptic Curve Digital Signature Algorithm) with the secp256k1 curve for public/private-key operations. The private key is 256 bits of random data. https://heta.org/
The HETA address associated with this private key is the last 160 bits of the SHA3-256 (Keccak) hash of the corresponding public key.
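As a hypothetical sketch of that derivation (Ethereum-style; HETA's exact serialization is not specified here, and the pycryptodome and python-ecdsa packages are assumed):

    from Crypto.Hash import keccak                     # pycryptodome's Keccak-256
    from ecdsa import SigningKey, SECP256k1

    sk = SigningKey.generate(curve=SECP256k1)          # 256-bit random private key
    pubkey = sk.get_verifying_key().to_string()        # 64-byte x||y public key (assumed encoding)
    digest = keccak.new(digest_bits=256, data=pubkey).digest()
    address = digest[-20:]                             # keep the last 160 bits of the hash
    print("0x" + address.hex())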
However, the speed at which existing blockchains reach block decisions has been too slow to solve real problems and replace existing systems. The root cause may be the scalability issues that affect Bitcoin, Ethereum, and other blockchains. Both the Bitcoin and Ethereum blockchains faced network congestion at the end of last year; slow transactions and excessively high fees threatened to defeat the purpose for which the technology was invented. Even decentralized applications (DApps) built on existing blockchains are usually limited by performance problems and fail to give users the convenience and ease of use they are accustomed to. There is a need for a blockchain that is designed to deliver the benefits of the technology and is also scalable enough to handle the massive traffic of existing systems.
Hetachain: solution
Hetachain intends to solve the above problems and create a high-performance blockchain that scales, makes smart contract creation easy, and is easy for end users to use. This is achieved through a hybrid consensus mechanism based on Delegated Proof of Stake (DPoS) and Byzantine Fault Tolerance (BFT). The goal is to create a blockchain that is scalable and has a short block time and high throughput. Using the Hetachain algorithm, a block is created every second and validated by the network.
submitted by FamiliarSleep to CryptoMangust [link] [comments]

Transaction malleability in Cryptonote/Monero

Question for the Monero devs, or somebody who already went deeply into Cryptonote technical aspects.
As far as I know, the reasons a Bitcoin transaction can be malleated come down to either differing encodings of the ECDSA signature (with or without trailing zeros) or the (relatively complex) scripting language used to redeem outputs, where in some cases the same logic can be obtained with different script content (thus changing the tx hash).
Cryptonote uses a different signature scheme and does not use the scripting language at all. I believe this makes it less prone to transaction malleability than Bitcoin. My question is: is it actually completely immune to it or not?
submitted by binaryFate to Monero [link] [comments]

Encrypt/Decrypt Message tool in Electron Cash

Encrypt/Decrypt Message tool in Electron Cash
Hi all, I have a question about the Encrypt/Decrypt Message tool in Electron Cash
I know that I can use a public key to encrypt some plain text and decrypt it with the corresponding private key.

For example,
Public key: 03c8803b937bf4970fdcec5f43295c3adf4731aaa9287bd58612089480c096e8bd
message: hello-world-123
With the tool, I got the encrypted text QklFMQKWrqTZYtpihiXAC/0HcPgZMlqoWxy3aREvX+jwKyAmMy93Y4uwz0bIDgQAu9A8RTv2SRy45mQOM5ryl7HiMr1NjVNNGAdKsv0+sMAq3IqV2g==
https://preview.redd.it/d19n3ff9yu421.png?width=1220&format=png&auto=webp&s=94e54f6f13a94c9fd333b87e7d8ef3c282b06ed0
When I click the "Encrypt" button again, I got a brand new encrypted text QklFMQMA0EA3pYj7FD1+TPh0qqUb6QmO6fr+qs58jvrBWND9hCA7EzQr68TpgBnLrq2Oj0n00YSYOdFnMxa52pve3JKDtsxEuLTL1+/ck2MDNcFp/g==
https://preview.redd.it/qyveey4qyu421.png?width=1220&format=png&auto=webp&s=e651d23357c1468576a962be46828517e65afc0a
This happens again and again, and I can decrypt all these different encrypted texts...

I know that Bitcoin and Bitcoin Cash:
- use ECC, to be more specific, the secp256k1 curve, to generate public and private key pairs
- use ECDSA, aka Elliptic Curve Digital Signature Algorithm, to sign and verify messages
- after some googling, the algorithm used to encrypt/decrypt text is called ECIES
Am I right?

For the encrypt and decrypt part, why does the encrypted text change every time...? What on earth happens inside this operation?
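For what it's worth, the short answer is that an ECIES encryption generates a fresh ephemeral key pair every time, so the same plaintext yields a different ciphertext on every run. The "QklFMQ" prefix in your outputs is just base64 for the "BIE1" magic bytes of the Electrum/Electron Cash format, which combines ECDH with AES-CBC and an HMAC. Below is a minimal toy sketch of the idea (my own illustrative Python using the python-ecdsa curve objects and a throwaway XOR cipher for short messages, not the wallet's actual construction):

    import os, hashlib
    from ecdsa import SECP256k1

    G, n = SECP256k1.generator, SECP256k1.order

    def ecies_encrypt(recipient_pub, plaintext):
        k = int.from_bytes(os.urandom(32), "big") % n       # fresh ephemeral secret on EVERY call
        R = G * k                                           # ephemeral public key, shipped with the ciphertext
        S = recipient_pub * k                               # ECDH shared point
        keystream = hashlib.sha256(S.x().to_bytes(32, "big")).digest()
        ct = bytes(p ^ s for p, s in zip(plaintext, keystream))   # toy cipher, short messages only
        return R, ct

    def ecies_decrypt(priv, R, ct):
        S = R * priv                                        # same shared point, recovered with the private key
        keystream = hashlib.sha256(S.x().to_bytes(32, "big")).digest()
        return bytes(c ^ s for c, s in zip(ct, keystream))

    priv = int.from_bytes(os.urandom(32), "big") % n
    pub = G * priv
    R1, ct1 = ecies_encrypt(pub, b"hello-world-123")
    R2, ct2 = ecies_encrypt(pub, b"hello-world-123")
    assert ct1 != ct2                                       # different ciphertext every run...
    assert ecies_decrypt(priv, R1, ct1) == ecies_decrypt(priv, R2, ct2) == b"hello-world-123"   # ...same plaintext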

I also found a GitHub repository about ECIES; it's from BitPay:
https://github.com/bitpay/bitcore-ecies

But I am quite confused with its example
https://bitcore.io/api/ecies/
https://preview.redd.it/s3eobfp20v421.png?width=1514&format=png&auto=webp&s=3988dfdf00bae529e49aca36cd09adb22215326f
When we encrypt data, we should not need anyone's private keys, right?
If I want to implement the encrypt/decrypt part, are there any libraries that I can use? Any recommendations?

Sorry to bother you, but I'd really like to know the truth.
Does anyone have any knowledge on this...? Thanks very much for helping and teaching me.
submitted by aaron67_cc to electroncash [link] [comments]

Is a Bitcoin signature derived just from a private key?

Reading through the dev guide.
I'm on transactions. https://bitcoin.org/en/developer-guide#transactions
It says:
"An secp256k1 signature made by using the ECDSA cryptographic formula to combine certain transaction data (described below) with Bob’s private key. This lets the pubkey script verify that Bob owns the private key which created the public key."
But the diagram below it suggests that the signature is derived from the private key + the Pubkey Script.
So is the signature derived just from the private key?
Or is it the private key + the pubkey script?
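For illustration, here is a hedged sketch with the python-ecdsa library (not Bitcoin Core's actual signing code): the signature is computed from the private key together with a hash of transaction data, and that hashed data commits to, among other fields, the pubkey script of the output being spent, so both readings are partly right.

    import hashlib
    from ecdsa import SigningKey, SECP256k1

    sk = SigningKey.generate(curve=SECP256k1)                  # Bob's private key
    # Placeholder bytes standing in for the serialized transaction fields plus the
    # pubkey script being spent (the real sighash serialization is more involved)
    tx_data = b"<tx version, inputs, outputs, pubkey script, sighash type>"
    sighash = hashlib.sha256(hashlib.sha256(tx_data).digest()).digest()   # Bitcoin hashes twice
    sig = sk.sign_digest(sighash)                              # ECDSA over (private key, message hash)
    assert sk.get_verifying_key().verify_digest(sig, sighash)  # anyone with the pubkey can verify

So the private key alone is not enough: change any of the committed transaction data (including the pubkey script) and the signature changes too.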
submitted by ecurrencyhodler to Bitcoin [link] [comments]

Pentesterlab. ECDSA challenge

Hi there,

I am struggling with the Pentesterlab challenge: https://pentesterlab.com/exercises/ecdsa

I'm wondering if someone can shed some light on how to solve some of the steps in this challenge. You can read about a similar challenge here - https://ropnroll.co.uk/2017/05/breaking-ecdsa/
I suppose I have problems with extracting (r, s) from the ECDSA (secp256k1) signature (details here - https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm)

I even tried to brute-force all possible (r, s) values, but no luck. Every time I receive a 500 error.

    import hashlib
    from ecdsa import SigningKey, SECP256k1, BadSignatureError
    from ecdsa.util import string_to_number
    from ecdsa.numbertheory import inverse_mod

    def sha2(data):
        # assumed helper (not in the original snippet): SHA-256 digest of a str/bytes value
        if isinstance(data, str):
            data = data.encode()
        return hashlib.sha256(data).digest()

    def recover_key(c1, sig1, c2, sig2, r_len, s_len):
        n = SECP256k1.order
        cookies = {}
        for s_idx in range(s_len, s_len + 2):
            for r_idx in range(r_len, r_len + 2):
                s1 = string_to_number(sig1[0 - s_idx:])
                s2 = string_to_number(sig2[0 - s_idx:])
                # https://bitcoin.stackexchange.com/questions/58853/how-do-you-figure-out-the-r-and-s-out-of-a-signature-using-python
                r1 = string_to_number(sig1[0 - (s_idx + r_idx + 2):0 - s_idx])
                r2 = string_to_number(sig2[0 - (s_idx + r_idx + 2):0 - s_idx])
                z1 = string_to_number(sha2(c1))
                z2 = string_to_number(sha2(c2))
                # Recover the reused nonce k: with identical k (and r), s1 - s2 = (z1 - z2) / k mod n
                k = (((z1 - z2) % n) * inverse_mod((s1 - s2), n)) % n
                # k = len(login1)
                # Recover private key: d = (s*k - z) / r mod n
                da1 = ((((s1 * k) % n) - z1) * inverse_mod(r1, n)) % n
                # da2 = ((((s2 * k) % n) - z2) * inverse_mod(r2, n)) % n
                # SECP256k1 is the Bitcoin elliptic curve
                sk = SigningKey.from_secret_exponent(da1, curve=SECP256k1, hashfunc=hashlib.sha256)
                # create the signature
                login_tgt = "admin"
                # Sign account
                login_hash = sha2(login_tgt)
                signature = sk.sign(login_hash, k=k)
                # Create signature key
                sig_dic_key = "r" + str(r_idx) + "s" + str(s_idx)
                try:
                    # because who trusts python
                    vk = sk.get_verifying_key()
                    vk.verify(signature, login_hash)
                    print(sig_dic_key, " - good signature")
                except BadSignatureError:
                    print(sig_dic_key, " - BAD SIGNATURE")

It's a very interesting challenge, and I want to finally break ECDSA.
Thanks in advance
submitted by unk1nd0n3 to webappsec [link] [comments]

Announcing Signatory 0.4: a multi-provider digital signature library for Rust

I've just released version 0.4 of Signatory: a digital signature library for Rust, with support for creating and verifying Ed25519 signatures.
Signatory's provider-based design exposes a common thread-and-trait-safe API to several popular Rust crates for creating digital signatures, including ed25519-dalek, ring, and sodiumoxide.
It also supports using YubiHSM2 devices to produce digital signatures, thereby providing a single API for producing signatures with either software or hardware-based backends.
Future releases will add support for ECDSA, using either the NIST P-256 (secp256r1) or the secp256k1 elliptic curve used by Bitcoin and several other cryptocurrencies.
Additional neat things to potentially support in future releases: the MacBook TouchBar's SEP as a biometrically authenticated way to produce ECDSA P-256 signatures!
submitted by bascule to rust [link] [comments]

The Elliptic Curve Digital Signature Algorithm and raw transactions on Bitcoin
Elliptic Curve Cryptography Overview - YouTube
Introduction to Bitcoin with Yours Bitcoin, Lecture 3: Elliptic Curves
Information Security & Cryptography (資訊安全&密碼學) - YouTube
Elliptic Curve Digital Signature Algorithm (ECDSA) - Public Key Cryptography w/ JAVA (tutorial 10)

Currently Bitcoin uses secp256k1 with the ECDSA algorithm, though the same curve with the same public/private keys can be used in some other algorithms such as Schnorr. secp256k1 was almost never used before Bitcoin became popular, but it is now gaining in popularity due to its several nice properties. Most commonly-used curves have a random structure, but secp256k1 was constructed in a special non-random way which allows for especially efficient computation; as a result, it is often more than 30% faster than other curves if the implementation is sufficiently optimized. The set of parameters Bitcoin uses is called secp256k1; it is one of the curves specified in the Standards for Efficient Cryptography (SEC) published by the Standards for Efficient Cryptography Group (SECG), and Bitcoin uses it with the Elliptic Curve Digital Signature Algorithm (ECDSA).
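For reference, the published secp256k1 domain parameters can be checked with a few lines of Python (the curve is y^2 = x^3 + 7 over the prime field F_p; the cross-check against the python-ecdsa constants is an illustrative assumption about that library being installed):

    # secp256k1 domain parameters as published by SECG (SEC 2)
    p = 2**256 - 2**32 - 977                                     # the special "non-random" prime
    a, b = 0, 7                                                  # curve: y^2 = x^3 + a*x + b
    Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
    Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
    n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # group order
    assert (Gy * Gy - (Gx**3 + a * Gx + b)) % p == 0             # the base point G lies on the curve

    # cross-check against the python-ecdsa library's constants
    from ecdsa import SECP256k1
    assert SECP256k1.order == n and SECP256k1.generator.x() == Gx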


The Elliptic Curve Digital Signature Algorithm and raw transactions on Bitcoin

I am not a bookseller, but the author of the book. The book covers: classical ciphers, basic number theory, information theory, symmetric-key cryptosystems, #rsa ciphers, #asymmetric-key cryptosystems and discrete logarithms ...
The third lecture covers elliptic curves and in particular secp256k1, the curve used by bitcoin. This curve is used for public keys and ECDSA, the digital signature algorithm of bitcoin.
In this video we show how to use AES256 symmetric encryption in practice to encrypt a file. It also shows how to encrypt and sign it with...
John Wagnon discusses the basics and benefits of Elliptic Curve Cryptography (ECC) in this episode of Lightboard Lessons. Check out this article on DevCentra...
