Technical: Confidential Transactions and Their Implementation Tradeoffs

As requested by estradata here: https://old.reddit.com/Bitcoin/comments/iylou9/what_are_some_of_the_latest_innovations_in_the/g6heez1/
It is a general issue that crops up at the extremes of cryptography, with quantum breaks being just one of the extremes of (classical) cryptography.

Computational vs Information-Theoretic

The dichotomy is between computationally infeasible vs information-theoretically infeasible. Basically:
Quantum breaks represent a possible reduction in computational infeasibility of certain things, but not information-theoretic infeasibility.
For example, suppose you want to know what 256-bit preimages map to 256-bit hashes. In theory, you just need to build a table with 2^256 entries and start from 0x0000000000000000000000000000000000000000000000000000000000000000 and so on. This is computationally infeasible, but not information-theoretic infeasible.
However, suppose you want to know what preimages, of any size, map to 256-bit hashes. Since the preimages can be of any size, after finishing with 256-bit preimages, you have to proceed to 257-bit preimages. And so on. And there is no size limit, so you will literally never finish. Even if you lived forever, you would not complete it. This is information-theoretic infeasible.
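To make the distinction concrete, here is a toy Python sketch of my own (illustrative only): searching preimages of every possible size is an unbounded loop that never terminates, no matter how much compute you throw at it.

    import hashlib
    from itertools import count

    def preimages_of(target_digest):
        # Enumerate candidate preimages of every possible length.
        # The outer loop is unbounded: this search never terminates,
        # which is the information-theoretic kind of infeasibility.
        for nbytes in count(1):
            for value in range(256 ** nbytes):
                candidate = value.to_bytes(nbytes, "big")
                if hashlib.sha256(candidate).digest() == target_digest:
                    yield candidate  # found one match; infinitely many may remain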

Commitments

How does this relate to confidential transactions? Basically, every confidential transaction simply hides the value behind a homomorphic commitment. What is a homomorphic commitment? Okay, let's start with commitments. A commitment is something which lets you hide something, and later reveal what you hid. Until you reveal it, even if somebody has access to the commitment, they cannot reverse it to find out what you hid. This is called the "hiding property" of commitments. However, when you do reveal it (or "open the commitment"), then you cannot replace what you hid with some other thing. This is called the "binding property" of commitments.
For example, a hash of a preimage is a commitment. Suppose I want to commit to something. For example, I want to show that I can predict the future using the energy of a spare galaxy I have in my pocket. I can hide that something by hashing a description of the future. Then I can give the hash to you. You still cannot learn the future, because it's just a hash, and you can't reverse the hash ("hiding"). But suppose the future event occurs. I can reveal that I did, in fact, know the future. So I give you the description, and you hash it and compare it to the hash I gave earlier. Because of preimage resistance, I cannot retroactively change what I hid in the hash, so what I gave must have been known to me at the time that I gave you the commitment i.e. hash ("binding").
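As a minimal sketch of that story in code (my own illustration of the hash-as-commitment idea described above):

    import hashlib

    def commit(secret: bytes) -> bytes:
        return hashlib.sha256(secret).digest()  # hand this digest over

    def verify_opening(commitment: bytes, revealed: bytes) -> bool:
        return hashlib.sha256(revealed).digest() == commitment

    prediction = b"description of the future event"
    c = commit(prediction)          # you hold c, but cannot reverse it ("hiding")
    # ...the event occurs, and I reveal the prediction...
    assert verify_opening(c, prediction)             # matches ("binding")
    assert not verify_opening(c, b"something else")  # a swap would be caught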

Homomorphic Commitments

A homomorphic commitment simply means that if I can do certain operations on preimages of the commitment scheme, there are certain operations on the commitments that would create similar ("homo") changes ("morphic") to the commitments. For example, suppose I have a magical function h() which is a homomorphic commitment scheme. It can hide very large (near 256-bit) numbers. Then if h() is homomorphic, there may be certain operations on numbers behind the h() that have homomorphisms after the h(). For example, I might have an operation <+> that is homomorphic in h() on +, or in other words, if I have two large numbers a and b, then h(a + b) = h(a) <+> h(b). + and <+> are different operations, but they are homomorphic to each other.
For example, elliptic curve scalars and points have homomorphic operations. Scalars (private keys) are "just" very large near-256-bit numbers, while points are a scalar times a standard generator point G. Elliptic curve operations exist where there is a <+> between points that is homomorphic on standard + on scalars, and a <*> between a scalar and a point that is homomorphic on standard * multiplication on scalars.
For example, suppose I have two large scalars a and b. I can use elliptic curve points as a commitment scheme: I can take a <*> G to generate a point A. It is hiding since nobody can learn what a is unless I reveal it (a and A can be used in standard ECDSA private-public key cryptography, with the scalar a as the private key and the point A as the public key, and the a cannot be derived even if somebody else knows A). Thus, it is hiding. At the same time, for a particular point A and standard generator point G, there is only one possible scalar a which when "multiplied" with G yields A. So scalars and elliptic curve points are a commitment scheme, with both hiding and binding properties.
Now, as mentioned there is a <+> operation on points that is homomorphic to the + operation on corresponding scalars. For example, suppose there are two scalars a and b. I can compute (a + b) <*> G to generate a particular point. But even if I don't know scalars a and b, but I do know points A = a <*> G and B = b <*> G, then I can use A <+> B to derive (a + b) <*> G (or equivalently, (a <*> G) <+> (b <*> G) == (a + b) <*> G). This makes points a homomorphic commitment scheme on scalars.
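Here is a hedged toy demonstration of the homomorphism, using exponentiation in a prime field as a stand-in for elliptic curve scalar multiplication (parameters made up for illustration; real CT uses secp256k1 points). h(a) = g^a mod p plays the role of a <*> G, and modular multiplication plays the role of <+>. Note this toy version is not hiding for small amounts, which is exactly the problem the Pedersen section below addresses.

    p = 2**127 - 1   # a Mersenne prime, chosen only for this toy example
    g = 3            # stand-in for the standard generator point G

    def h(a):
        return pow(g, a, p)   # stand-in for a <*> G

    a, b = 123456789, 987654321
    # h(a + b) = h(a) <+> h(b): the operation after h() mirrors + before h()
    assert h(a + b) == (h(a) * h(b)) % p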

Confidential Transactions: A Sketch

This is useful since we can use the near-256-bit scalars in SECP256K1 elliptic curves to easily represent values in a monetary system, and hide those values by using a homomorphic commitment scheme. We can use the hiding property to prevent people from learning the values of the money we are sending and receiving.
Now, in a proper cryptocurrency, a normal, non-coinbase transaction does not create or destroy coins: the values of the input coins are equal to the value of the output coins. We can use a homomorphic commitment scheme. Suppose I have a transaction that consumes an input value a and creates two output values b and c. That is, a = b + c, i.e. the sum of all inputs a equals the sum of all outputs b and c. But remember, with a homomorphic commitment scheme like elliptic curve points, there exists a <+> operation on points that is homomorphic to the ordinary school-arithmetic + addition on large numbers. So, confidential transactions can use points a <*> G as input, and points b <*> G and c <*> G as output, and we can easily prove that a <*> G = (b <*> G) <+> (c <*> G) if a = b + c, without revealing a, b, or c to anyone.
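Continuing the toy scheme from the sketch above, a validator that sees only the commitments can still check that the input equals the sum of the outputs:

    a = 10_000            # input amount
    b, c = 4_000, 6_000   # output amounts, with a = b + c
    input_commitment = h(a)
    output_commitments = [h(b), h(c)]

    product = 1
    for oc in output_commitments:
        product = (product * oc) % p   # <+> over all outputs
    # Balance check passes without the validator ever seeing a, b, or c:
    assert input_commitment == product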

Pedersen Commitments

Actually, we cannot just use a <*> G as a commitment scheme in practice. Remember, Bitcoin has a cap on the number of satoshis ever to be created, and it's less than 2^53 satoshis, which is fairly trivial. I can easily compute all values of a <*> G for all values of a from 0 to 2^53 and know which a <*> G corresponds to which actual amount a. So in confidential transactions, we cannot naively use a <*> G commitments, we need Pedersen commitments.
If you know what a "salt" is, then Pedersen commitments are fairly obvious. A "salt" is something you add to e.g. a password so that the hash of the password is much harder to attack. Humans are idiots and when asked to generate passwords, will output a password that takes less than 230 possibilities, which is fairly easy to grind. So what you do is that you "salt" a password by prepending a random string to it. You then hash the random string + password, and store the random string --- the salt --- together with the hash in your database. Then when somebody logs in, you take the password, prepend the salt, hash, and check if the hash matches with the in-database hash, and you let them log in. Now, with a hash, even if somebody copies your password database, the can't get the password. They're hashed. But with a salt, even techniques like rainbow tables make a hacker's life even harder. They can't hash a possible password and check every hash in your db for something that matches. Instead, if they get a possible password, they have to prepend each salt, hash, then compare. That greatly increases the computational needs of a hacker, which is why salts are good.
What a Pedersen commitment is, is a point a <*> H, where a is the actual value you commit to, plus <+> another point r <*> G. H here is a second standard generator point, different from G. The r is the salt in the Pedersen commitment. It makes it so that even if you show (a <*> H) <+> (r <*> G) to somebody, they can't grind all possible values of a and try to match it with your point --- they also have to grind r (just as with the password-salt example above). And r is much larger, it can be a true near-256-bit number that is the range of scalars in SECP256K1, whereas a is constrained to "reasonable" numbers of satoshi, which cannot exceed 21 million Bitcoins.
Now, in order to validate a transaction with input a and outputs b and c, you only have to prove a = b + c. Suppose we are hiding those amounts using Pedersen commitments. You have an input of amount a, and you know a and r. The blockchain has an amount (a <*> H) <+> (r <*> G). In order to create the two outputs b and c, you just have to create two new r scalars such that r = r[0] + r[1]. This is trivial, you just select a new random r[0] and then compute r[1] = r - r[0], it's just basic algebra.
Then you create a transaction consuming the input (a <*> H) <+> (r <*> G) and outputs (b <*> H) <+> (r[0] <*> G) and (c <*> H) <+> (r[1] <*> G). You know that a = b + c, and r = r[0] + r[1], while fullnodes around the world, who don't know any of the amounts or scalars involved, can just take the points (a <*> H) <+> (r <*> G) and see if it equals (b <*> H) <+> (r[0] <*> G) <+> (c <*> H) <+> (r[1] <*> G). That is all that fullnodes have to validate, they just need to perform <+> operations on points and comparison on points, and from there they validate transactions, all without knowing the actual values involved.
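In the same toy multiplicative notation as earlier (made-up parameters; a real Pedersen scheme uses two curve points whose discrete-log relation is unknown), the whole flow looks like this:

    import os

    p = 2**127 - 1
    g = 3    # stand-in for G
    H = 7    # stand-in for the second generator H (toy choice only)

    def pedersen(amount, r):
        # (amount <*> H) <+> (r <*> G)
        return (pow(H, amount, p) * pow(g, r, p)) % p

    a = 10_000                                   # input amount
    r = int.from_bytes(os.urandom(16), "big")    # input salt
    b, c = 4_000, 6_000                          # outputs, a = b + c
    r0 = int.from_bytes(os.urandom(16), "big")   # fresh random salt
    r1 = (r - r0) % (p - 1)                      # r = r[0] + r[1], basic algebra
                                                 # (exponents live mod p - 1 here)
    lhs = pedersen(a, r)
    rhs = (pedersen(b, r0) * pedersen(c, r1)) % p
    assert lhs == rhs   # fullnodes verify this without learning any amount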

Computational Binding, Information-Theoretic Hiding

Like all commitments, Pedersen commitments are binding and hiding.
However, each of those properties can hold at one of two strengths: computational or information-theoretic.
What does this mean? It's just a measure of how "impossible" breaking the property is. Pedersen commitments are computationally binding, meaning that a user of this commitment with arbitrary time and space and energy can, in theory, replace the amount with something else. However, they are information-theoretically hiding, meaning an attacker with arbitrary time and space and energy still cannot figure out exactly what got hidden behind the commitment.
But why?
Now, we have been using a and a <*> G as private keys and public keys in ECDSA and Schnorr. There is an operation <*> on a scalar and a point that generates another point, but we cannot "reverse" this operation. For example, even if I know A, and know that A = a <*> G, but do not know a, I cannot derive a --- there is no operation between A and G that lets me learn a.
Actually there is: I "just" need to have so much time, space, and energy that I just start counting a from 0 to 2^256 and find which a results in A = a <*> G. This is a computational limit: I don't have a spare universe in my back pocket I can use to do all those computations.
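A toy illustration of that brute force, shrunk to a group small enough for the loop to actually finish (with secp256k1 the same loop would need ~2^256 iterations):

    p = 101   # tiny prime, so the search is instant
    g = 2     # a generator of the multiplicative group mod 101

    def brute_force_dlog(target):
        for a in range(p - 1):   # "start counting a from 0..."
            if pow(g, a, p) == target:
                return a

    A = pow(g, 37, p)            # A = a <*> G with a = 37
    assert brute_force_dlog(A) == 37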
Now, replace a with h and A with H. Remember that Pedersen commitments use a "second" standard generator point. The generator points G and H are "not really special" --- they are just random points on the curve that we selected and standardized. There is no operation between H and G that lets me learn the h where H = h <*> G, though if I happen to have a spare universe in my back pocket I can "just" brute force it.
Suppose I do have a spare universe in my back pocket, and learn the h such that H = h <*> G. What can I do in Pedersen commitments?
Well, I have an amount a that is committed to by (a <*> H) <+> (r <*> G). But I happen to know h! Suppose I want to double my money a without involving Elon Musk. Then I can open the very same commitment to the amount 2 * a, by replacing the salt r with r - a * h, since:
(a <*> H) <+> (r <*> G)
= ((a * h) <*> G) <+> (r <*> G)
= ((a * h + r) <*> G)
= ((((2 * a) * h) + (r - a * h)) <*> G)
= (((2 * a) * h) <*> G) <+> ((r - a * h) <*> G)
= ((2 * a) <*> H) <+> ((r - a * h) <*> G)
That is what we mean by computationally binding: if I can compute h such that H = h <*> G, then I can find another number which opens the same commitment. And of course I'd make sure that number is much larger than what I originally had in that address!
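Here is the attack run in the same tiny toy group as before, where h really is brute-forceable (all parameters illustrative; the comments map values back to the notation above):

    p, g = 101, 2                 # tiny toy group again
    h = 13                        # the brute-forced secret: H = h <*> G
    H = pow(g, h, p)              # the "second generator"

    def pedersen(amount, r):
        return (pow(H, amount, p) * pow(g, r, p)) % p

    a, r = 5, 42
    commitment = pedersen(a, r)   # (a <*> H) <+> (r <*> G)

    # Open the very same commitment to double the amount:
    a2 = 2 * a
    r2 = (r - a * h) % (p - 1)    # the fixed-up salt r - a * h
    assert pedersen(a2, r2) == commitment   # binding is broken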
Now, the reason why it is "only" computationally binding is that it is information-theoretically hiding. Suppose somebody knows h, but has no money in the cryptocurrency. All they see are points. They can try to find what the original amounts are, but because any amount can be mapped to "the same" point with knowledge of h (e.g. in the above, a and 2 * a got mapped to the same point by "just" replacing the salt r with r - a * h; this can be done for 3 * a, 4 * a etc.), they cannot learn historical amounts --- the a in historical amounts could be anything.
The drawback, though, is that --- as seen above --- arbitrary inflation is now introduced once somebody knows h. They can multiply their money by any arbitrary factor with knowledge of h.
It is impossible to have both perfect hiding (i.e. historical amounts remain hidden even after a computational break) and perfect binding (i.e. you can't later open the commitment to a different, much larger, amount).
Pedersen commitments just happen to have perfect hiding, but only computationally-infeasible binding. This means they allow hiding historical values, but in case of anything that allows better computational power --- including but not limited to quantum breaks --- they allow arbitrary inflation.

Changing The Tradeoffs with ElGamal Commitments

An ElGamal commitment is just a Pedersen commitment, but with the point r <*> G also stored in a separate section of the transaction.
This commits the r, and fixes it to a specific value. This prevents me from opening my (a <*> H) <+> (r <*> G) as ((2 * a) <*> H) <+> ((r - a * h) <*> G), because the (r - a * h) would not match the r <*> G sitting in a separate section of the transaction. This forces me to be bound to that specific value, and no amount of computation power will let me escape --- it is information-theoretically binding i.e. perfectly binding.
But that is now computationally hiding. An evil surveillor with arbitrary time and space can focus on the r <*> G sitting in a separate section of the transaction, and grind r from 0 to 2^256 to determine what r matches that point. Then from there, they can negate r to get (-r) <*> G and add it to the (a <*> H) <+> (r <*> G) to get a <*> H, and then grind that to determine the value a. With massive increases in computational ability --- including but not limited to quantum breaks --- an evil surveillor can see all the historical amounts of confidential transactions.
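Continuing the tiny-group demo from before (again purely illustrative), the extra stored point is what catches the salt swap:

    # ElGamal-style: the transaction additionally stores r <*> G.
    salt_point = pow(g, r, p)     # pinned in a separate section of the tx

    # The doubling trick still reproduces the Pedersen commitment...
    assert pedersen(a2, r2) == commitment
    # ...but the fixed-up salt no longer matches the pinned point:
    assert pow(g, r2, p) != salt_point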

Conclusion

This is the source of the tradeoff: either you design confidential transactions so in case of a quantum break, historical transactions continue to hide their amounts, but inflation of the money is now unavoidable, OR you make the money supply sacrosanct, but you potentially sacrifice amount hiding in case of some break, including but not limited to quantum breaks.
submitted by almkglor to Bitcoin

Thanks to all who submitted questions for Shiv Malik in the GAINS AMA yesterday, it was great to see so much interest in Data Unions! You can read the full transcript here:

Gains x Streamr AMA Recap

Thanks to everyone in our community who attended the GAINS AMA yesterday with Shiv Malik. We were excited to see that so many people attended and gladly overwhelmed by the amount of questions we got from you on Twitter and Telegram. We decided to do a little recap of the session for anyone who missed it, and to archive some points we haven’t previously discussed with our community. Happy reading and thanks to Alexandre and Henry for having us on their channel!
What is the project about in a few simple sentences?
At Streamr we are building a real-time network for tomorrow’s data economy. It’s a decentralized, peer-to-peer network which we are hoping will one day replace centralized message brokers like Amazon’s AWS services. On top of that one of the things I’m most excited about are Data Unions. With Data Unions anyone can join the data economy and start monetizing the data they already produce. Streamr’s Data Union framework provides a really easy way for devs to start building their own data unions and can also be easily integrated into any existing apps.
Okay, sounds interesting. Do you have a concrete example you could give us to make it easier to understand?
The best example of a Data Union is the first one that has been built out of our stack. It's called Swash and it's a browser plugin.
You can download it here: http://swashapp.io/
And basically it helps you monetize the data you already generate (day in day out) as you browse the web. It's the sort of data that Google already knows about you. But this way, with Swash, you can actually monetize it yourself. The more people that join the union, the more powerful it becomes and the greater the rewards are for everyone as the data product sells to potential buyers.
Very interesting. What stage is the project/product at? It's live, right?
Yes. It's live. And the Data Union framework is in public beta. The Network is on course to be fully decentralized at some point next year.
How much can a regular person browsing the Internet expect to make for example?
So that's a great question. The answer is no one quite knows yet. We do know that this sort of data (consumer insights) is worth hundreds of millions and really isn't available in high quality. So with a union of a few million people, everyone could be getting 20-50 dollars a year. But it'll take a few years at least to realise that growth. Of course Swash is just one data union amongst many possible others (which are now starting to get built out on our platform!)
With Swash, I believe they now have 3,000 members. They need to get to 50,000 before they become really viable but they are yet to do any marketing. So all that is organic growth.
I assume the data is anonymized btw?
Yes. And there are in fact a few privacy-protecting tools Swash supplies to its users.
How does Swash compare to Brave?
So Brave really is about consent for people's attention and getting paid for that. They don't sell your data as such.
Swash can of course be a plugin with Brave and therefore you can make passive income browsing the internet. Whilst also then consenting to advertising if you so want to earn BAT.
Of course it's Streamr that is powering Swash. And we're looking at powering other DUs - say for example mobile applications.
The holy grail might be having already existing apps and platforms out there, integrating DU tech into their apps so people can consent (or not) to having their data sold - and then getting a cut of that revenue when it does sell.
The other thing to recognise is that the big tech companies monopolise data on a vast scale - data that we of course produce for them. That is stifling innovation.
Take for example a competitor map app. To effectively compete with Google maps or Waze, they need millions of users feeding real time data into it.
Without that - it's like Google maps used to be - static and a bit useless.
Right, so how do you convince these big tech companies that are producing these big apps to integrate with Streamr? Does it mean they wouldn't be able to monetize data as well on their end if it becomes more available through an aggregation of individuals?
If a map application does manage to scale to that level then inevitably Google buys them out - that's what happened with Waze.
But if you have a data union which bundles together the raw location data of millions of people then any application builder can come along and license that data for their app. This encourages all sorts of innovation and breaks the monopoly.
We're currently having conversations with Mobile Network operators to see if they want to pilot this new approach to data monetization. And that's even more exciting. Just be explicit with users - do you want to sell your data? Okay, if yes, then which data points do you want to sell.
Then the mobile network operator (like T-mobile for example) then organises the sale of the data of those who consent and everyone gets a cut.
Streamr - in this example provides the backend to port and bundle the data, and also the token and payment rail for the payments.
So for big companies (mobile operators in this case), it's less logistics, handing over the implementation to you, and simply taking a cut?
It's a vision that we'll be able to talk more about more concretely in a few weeks time 😁
Compared to having to make sense of that data themselves (in the past) and selling it themselves
Sort of.
We provide the backend to port the data and the template smart contracts to distribute the payments.
They get to focus on finding buyers for the data and ensuring that the data that is being collected from the app is the kind of data that is valuable and useful to the world.
(Through our sister company TX, we also help build out the applications for them and ensure a smooth integration).
The other thing to add is that the reason why this vision is working, is that the current data economy is under attack. Not just from privacy laws such as GDPR, but also from Google shutting down cookies, bidstream data being investigated by the FTC (for example) and Apple making changes to iOS 14 to make third party data sharing more explicit for users.
All this means that the only real places for thousands of multinationals to buy the sort of consumer insights they need to ensure good business decisions will be owned by Google/FB etc, or from SDKs or through this method - from overt, rich, consent from the consumer in return for a cut of the earnings.
A couple of questions to get a better feel about Streamr as a whole now and where it came from. How many people are in the team? For how long have you been working on Streamr?
We are around 35 people with one office in Zug, Switzerland and another one in Helsinki. But there are team members all over the globe, we have people in the US, Spain, the UK, Germany, Poland, Australia and Singapore. I joined Streamr back in 2017 during the ICO craze (but not for that reason!)
And did you raise funds so far? If so, how did you handle them? Are you planning to do any future raises?
We did an ICO back in Sept/Oct 2017 in which we raised around 30 million CHF. The funds give us enough runway for around five/six years to finalize our roadmap. We’ve also simultaneously opened up a sister consultancy business, TX, which helps enterprise clients implement the Streamr stack. We've got no more plans to raise more!
What is the token use case? How did you make sure it captures the value of the ecosystem you're building
The token is used for payments on the Marketplace (such as for Data Union products for example) and also for the broker nodes in the Network (we haven't talked much about the P2P network but it's our project's secret sauce).
The broker nodes will be paid in DATAcoin for providing bandwidth. We are currently working together with BlockScience on our token economics. We’ve just started the second phase in their consultancy process and will soon be able to share more on the Streamr Network’s token economics.
But if you want to sum up the Network in a sentence or two - imagine the BitTorrent network being run by nodes who get paid to do so. Except that instead of passing around static files, it's realtime data streams.
That of course means it's really well suited for the IoT economy.
Well, let's continue with questions from Twitter and this one comes at the perfect time. Can Streamr Network be used to transfer data from IOT devices? Is the network bandwidth sufficient? How is it possible to monetize the received data from a huge number of IOT devices? From u/ EgorCypto
Yes, IoT devices are a perfect use case for the Network. When it comes to the network’s bandwidth and speed - the Streamr team just recently did extensive research to find out how well the network scales.
The result was that it is on par with centralized solutions. We ran experiments with network sizes between 32 to 2048 nodes and in the largest network of 2048 nodes, 99% of deliveries happened within 362 ms globally.
To put these results in context, PubNub, a centralized message brokering service, promises to deliver messages within 250 ms — and that’s a centralized service! So we're super happy with those results.
Here's a link to the paper:
https://medium.com/streamrblog/streamr-network-performance-and-scalability-whitepaper-adb461edd002
While we're on the technical side, second question from Twitter: Can you be sure that valuable data is safe and not shared with service providers? Are you using any encryption methods? From u/ CryptoMatvey
Yes, the messages in the Network are encrypted. Currently all nodes are still run by the Streamr team. This will change in the Brubeck release - our last milestone on the roadmap - which adds end-to-end encryption and automatic key exchange mechanisms, ensuring that node operators cannot access any confidential data.
BTW, if you want to get very technical, the encryption algorithms we are using are: AES (AES-256-CTR) for encryption of data payloads, RSA (PKCS #1) for securely exchanging the AES keys and ECDSA (secp256k1) for data signing (same as Bitcoin and Ethereum).
Last question from Twitter, less technical now :) In their AMA ad, they say that Streamr has three unions, Swash, Tracey and MyDiem. Why does Tracey help fisherfolk in the Philippines monetize their catch data? Do they only work with this country or do they plan to expand? From u/ alej_pacedo
So yes, Tracey is one of the first Data Unions on top of the Streamr stack. Currently we are working together with the WWF-Philippines and the UnionBank of the Philippines on doing a first pilot with local fishing communities in the Philippines.
WWF is interested in the catch data to protect wildlife and make sure that no overfishing happens. And at the same time the fisherfolk are incentivized to record their catch data by being able to access micro loans from banks, which in turn helps them make their business more profitable.
So far, we have lots of interest from other places in South East Asia which would like to use Tracey, too. In fact TX have already had explicit interest in building out the use cases in other countries and not just for sea-food tracking, but also for many other agricultural products.
(I think they had a call this week about a use case involving cows 😂)
I recall late last year, that the Streamr Data Union framework was launched into private beta, now public beta was recently released. What are the differences? Any added new features? By u/ Idee02
The main difference will be that the DU 2.0 release will be more reliable and also more transparent since the sidechain we are using for micropayments is also now based on blockchain consensus (PoA).
Are there plans in the pipeline for Streamr to focus on the consumer-facing products themselves or will the emphasis be on the further development of the underlying engine?by u/ Andromedamin
We're all about what's under the hood. We want third party devs to take on the challenge of building the consumer facing apps. We know it would be foolish to try and do it all!
As a project how do you consider the progress of the project to fully developed (in % of progress plz) by u/ Hash2T
We're about 60% through I reckon!
What tools does Streamr offer developers so that they can create their own DApps and monetize data?What is Streamr Architecture? How do the Ethereum blockchain and the Streamr network and Streamr Core applications interact? By u/ CryptoDurden
We'll be releasing the Data Union framework in a few weeks from now and I think DApp builders will be impressed with what they find.
We all know that Blockchain has many disadvantages as well,
So why did Streamr choose blockchain as a combination for its technology?
What's your plan to merge Blockchain with your technologies to make it safer and more convenient for your users? By u/ noonecanstopme
So we're not a blockchain ourselves - that's important to note. The P2P network only uses BC tech for the payments. Why on earth, for example, would you want to store every single piece of info on a blockchain? You should only store what you want to store. And that should probably happen off chain.
So we think we got the mix right there.
What were the requirements needed for node setup? by u/ John097
Good q - we're still working on that but those specs will be out in the next release.
How does the STREAMR team ensure good data is entered into the blockchain by participants? By u/ kartika84
Another great Q there! From the product buying end, this will be done by reputation. But ensuring the quality of the data as it passes through the network - if that is what you also mean - is all about getting the architecture right. In a decentralised network, that's not easy as data points in streams have to arrive in the right order. It's one of the biggest challenges but we think we're solving it in a really decentralised way.
What are the requirements for integrating applications with Data Union? What role does the DATA token play in this case? By u/ JP_Morgan_Chase
There are no specific requirements as such, just that your application needs to generate some kind of real-time data. Data Union members and administrators are both paid in DATA by data buyers coming from the Streamr marketplace.
Regarding security and legality, how does STREAMR guarantee that the data uploaded by a given user belongs to him and he can monetize and capitalize on it? By u/ kherrera22
So that's a sort of million dollar question for anyone involved in a digital industry. Within our system there are ways of ensuring that, but in the end the negotiation of data licensing will still, in many ways, be done human to human and via legal licenses rather than smart contracts, at least when it comes to sizeable data products. There are more answers to this but it's a long one!
Okay thank you all for all of those!
The AMA took place in the GAINS Telegram group 10/09/20. Answers by Shiv Malik.
submitted by thamilton5 to streamr

ECDSA In Bitcoin

Digital signatures are considered the foundation of online sovereignty. The advent of public-key cryptography in 1976 paved the way for the creation of a global communications tool – the Internet, and a completely new form of money – Bitcoin. Although the fundamental properties of public-key cryptography have not changed much since then, dozens of different open-source digital signature schemes are now available to cryptographers.

How ECDSA was incorporated into Bitcoin

When Satoshi Nakamoto, the mysterious founder of the first crypto, started working on Bitcoin, one of the key points was to select the signature scheme for an open and public financial system. The requirements were clear: the algorithm should be widely used, understandable, safe enough, easy to use, and, most importantly, open source.
Of all the options available at that time, he chose the one that met these criteria: Elliptic Curve Digital Signature Algorithm, or ECDSA.
At that time, native support for ECDSA was provided in OpenSSL, an open set of encryption tools developed by experienced cryptographers in order to increase the confidentiality of online communications. Compared to other popular schemes, ECDSA had advantages such as shorter keys and faster computation at an equivalent security level.
These are extremely useful features for digital money. At the same time, it provides a proportional level of security: for example, a 256-bit ECDSA key has the same level of security as a 3072-bit RSA key (Rivest, Shamir and Adleman) with a significantly smaller key size.

Basic principles of ECDSA

ECDSA is a process that uses elliptic curves and finite fields to “sign” data in such a way that third parties can easily verify the authenticity of the signature, but the signer himself reserves the exclusive opportunity to create signatures. In the case of Bitcoin, the “data” that is signed is a transaction that transfers ownership of bitcoins.
ECDSA has two separate procedures for signing and verifying. Each procedure is an algorithm consisting of several arithmetic operations. The signature algorithm uses the private key, and the verification algorithm uses only the public key.
To use ECDSA, such protocol as Bitcoin must fix a set of parameters for the elliptic curve and its finite field, so that all users of the protocol know and apply these parameters. Otherwise, everyone will solve their own equations, which will not converge with each other, and they will never agree on anything.
For all these parameters, Bitcoin uses very, very large (well, awesomely incredibly huge) numbers. It is important. In fact, all practical applications of ECDSA use huge numbers. After all, the security of this algorithm relies on the fact that these values are too large to recover a key by simple brute force. A 384-bit ECDSA key is considered safe enough even by the NSA, the USA's most secretive government service.

Replacement of ECDSA

Thanks to the hard work done by Peter Wuille (a famous cryptography specialist) and his colleagues on libsecp256k1, an optimized implementation of the secp256k1 elliptic curve, Bitcoin's ECDSA has become even faster and more efficient. However, ECDSA still has some shortcomings, which can serve as a sufficient basis for its complete replacement. After several years of research and experimentation, a new signature scheme was established to increase the confidentiality and efficiency of Bitcoin transactions: Schnorr's digital signature scheme.
Schnorr's signature takes the process of using “keys” to a new level. It takes only 64 bytes when it gets into the block, which reduces the space occupied by transactions by 4%. Since transactions with the Schnorr signature are the same size, this makes it possible to pre-calculate the total size of the part of the block that contains such signatures. A preliminary calculation of the block size is the key to its safe increase in the future.
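As a rough back-of-the-envelope check of that figure (the byte counts below are common approximations I am assuming, not consensus constants):

    der_ecdsa_sig = 72   # a DER-encoded ECDSA signature is typically 71-72 bytes
    schnorr_sig = 64     # fixed-size Schnorr signature
    typical_tx = 250     # ballpark size of a simple 1-input/2-output transaction

    saving = (der_ecdsa_sig - schnorr_sig) / typical_tx
    print(f"~{saving:.1%} of transaction space saved")   # ~3.2%, near the quoted 4%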
Keep up with the news of the crypto world at CoinJoy.io. Follow us on Twitter and Medium. Subscribe to our YouTube channel. Join our Telegram channel. For any inquiries mail us at [email protected].
submitted by CoinjoyAssistant to btc

Help me code it!

Hi everyone, I am learning Python and it's quite hard for me. I want to calculate a public key from a private key with ECC. I have this code from GitHub, converted it to Python 3, and it does not work:
    # Super simple Elliptic Curve Presentation. No imported libraries, wrappers, nothing.
    # For educational purposes only. (Originally written for Python 2.7.6 or lower;
    # fixed below to run on Python 3: integer division in modinv, hex formatting.)
    # Below are the public specs for Bitcoin's curve - the secp256k1

    Pcurve = 2**256 - 2**32 - 2**9 - 2**8 - 2**7 - 2**6 - 2**4 - 1  # The proven prime
    N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # Number of points in the field
    Acurve = 0
    Bcurve = 7  # These two define the elliptic curve: y^2 = x^3 + Acurve * x + Bcurve
    Gx = 55066263022277343669578718895168534326250603453777594175500187360389116729240
    Gy = 32670510020758816978083085130507043184471273380659243275938904335757337482424
    GPoint = (Gx, Gy)  # This is our generator point. Trillions of dif ones possible

    # Individual Transaction/Personal Information
    privKey = 0xA0DC65FFCA799873CBEA0AC274015B9526505DAAAED385155425F7337704883E  # replace with any private key

    def modinv(a, n=Pcurve):  # Extended Euclidean Algorithm/'division' in elliptic curves
        lm, hm = 1, 0
        low, high = a % n, n
        while low > 1:
            ratio = high // low  # must be integer division; plain '/' is the Python 3 bug
            nm, new = hm - lm * ratio, high - low * ratio
            lm, low, hm, high = nm, new, lm, low
        return lm % n

    def ECadd(a, b):  # Not true addition, invented for EC. Could have been called anything.
        LamAdd = ((b[1] - a[1]) * modinv(b[0] - a[0], Pcurve)) % Pcurve
        x = (LamAdd * LamAdd - a[0] - b[0]) % Pcurve
        y = (LamAdd * (a[0] - x) - a[1]) % Pcurve
        return (x, y)

    def ECdouble(a):  # This is called point doubling, also invented for EC.
        Lam = ((3 * a[0] * a[0] + Acurve) * modinv(2 * a[1], Pcurve)) % Pcurve
        x = (Lam * Lam - 2 * a[0]) % Pcurve
        y = (Lam * (a[0] - x) - a[1]) % Pcurve
        return (x, y)

    def EccMultiply(GenPoint, ScalarHex):  # Double & add. Not true multiplication
        if ScalarHex == 0 or ScalarHex >= N:
            raise Exception("Invalid Scalar/Private Key")
        ScalarBin = bin(ScalarHex)[2:]
        Q = GenPoint
        for i in range(1, len(ScalarBin)):  # This is invented EC multiplication.
            Q = ECdouble(Q)
            if ScalarBin[i] == "1":
                Q = ECadd(Q, GenPoint)
        return Q

    PublicKey = EccMultiply(GPoint, privKey)
    print()
    print("******* Public Key Generation *********")
    print()
    print("the private key:")
    print(hex(privKey))
    print()
    print("the uncompressed public key (not address):")
    print(PublicKey)
    print()
    print("the uncompressed public key (HEX):")
    print("04" + "%064x" % PublicKey[0] + "%064x" % PublicKey[1])
    print()
    print("the official Public Key - compressed:")
    if PublicKey[1] % 2 == 1:  # If the Y value for the Public Key is odd.
        print("03" + "%064x" % PublicKey[0])  # Python 2's hex()[2:-1] stripped the 'L'; use %x in Python 3
    else:  # Or else, if the Y value is even.
        print("02" + "%064x" % PublicKey[0])
submitted by Phuc_Jackson to Bitcoin

In response to ProofOfResearch's misleading article on NEO.

Yesterday, I was made aware of an article published by ProofOfResearch almost entirely based on a Reddit post that I had written a few months ago. About a month ago I was contacted by Randomshortdude (supposedly ProofOfResearch himself) asking for permission to use excerpts from the aforementioned post in his write-up about NEO. As an avid proponent of inclusivity and transparency, I gave permission to use the contents of my post (the screenshots of the entire conversation will be added below), providing him with links to the Github repos and updating him on the fixes and improvements that have happened since the post had been published. Unknowingly, I continued to work on my projects while my post was being molded into a foundation for an entirely misleading and unfathomably unscientific article.
This post is going to consist of a list of excerpts from the article and the corresponding refutal for each of the listed excerpts.
"This is a semantic issue (example: $BTC having a 1 MB block size + 10 min block time limits TPS; no way around that) meaning that this is immutable"
Bitcoin doesn't have a 10 minute block time limit coded into the platform. The 10 minute average block production time is obtained via a difficulty adjustment formula that readjusts the difficulty of the underlying HashCash PoW algorithm every 2016 blocks, based on the average block production time of the preceding blocks (measured, due to an off-by-one bug that was never fixed, over only 2015 block intervals).
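For concreteness, here is a hedged sketch of that retargeting rule (simplified: the real consensus code operates on the compact "bits" target encoding, but the arithmetic is this):

    TARGET_SPACING = 10 * 60     # desired seconds per block
    INTERVAL = 2016              # blocks between difficulty retargets

    def next_target(old_target, actual_timespan):
        expected = TARGET_SPACING * INTERVAL
        # Clamp the adjustment to a factor of 4, as Bitcoin does
        actual_timespan = max(expected // 4, min(actual_timespan, expected * 4))
        # A larger target means lower difficulty: blocks found too fast
        # shrink the target, blocks found too slowly grow it
        return old_target * actual_timespan // expected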
"I’m not sure it’s even possible to change the digital signature of a protocol without a major hard fork, and there isn’t an alternative digital signature (that I think of), that would make this any more secure."
This excerpt is written in a reference to Point 2 of my Reddit post that criticizes the use of multisigs as a proof of the fact that a quorum (at least 2f + 1) of replicas had signed the block hash. The use of multisig instead of signature batching via Schnorr's signatures doesn't affect the security of the nodes or the cryptographic standards used; however, the security of the network as a whole can be compromised, since the decreased number of operating full/light nodes increases the likelihood of a spam attack being able to degrade the performance of the platform. Apart from that, a digital signature algorithm of the platform can be easily changed by adjusting the versioning of the block and transaction structures.
"Therefore, the consensus algo itself would need to be changed to amend this issue."
The consensus protocol works independently from the cryptographic standards of the platform, so a switch to a different elliptic curve or digital signature algorithm will have zero impact on the consensus algorithm.
"This, in itself, might be what stops $NEO from ever being able to truly scale."
While digital signature algorithms can vary in signing and verification speeds, the difference in the performance of the most popular signature schemes is small enough (except for BLS) to be considered to have a negligible impact on the efficiency of the consensus. As long the nodes are running on an efficient implementation, average network throughput is going continue to be the main bottleneck of the platform.
"Digital signatures are somewhat complex, but not incomprehensible if you really take the time to sit down and understand it. Once again though, it’s going to rely on an understanding of blockchain tech as well to know how this impacts the signing feature of a TX itself as well as pub key creation"
Digital signature algorithms play no role in public key creation, as a public key is created simply by multiplying a 256-bit entropy value (the private key) by the generator point (G).


A screenshot of a tweet used in the article.
Baffling. ed25519 DSA does not impact the efficiency of BFT and "blockchain" (whatever the hell that means in this context) as a result. Please also note that NEO does not use ed25519. NEO uses secp256r1 (as opposed to secp256k1 used by Bitcoin, which is a Koblitz curve) which is a NIST-recommended elliptic curve.
"Regular PoW algos are already designed to be Byzantine fault-tolerant already"
While being technically correct, the author dismisses the fact that BFT algorithms offer Byzantine fault-tolerance under rigid mathematical assumptions, in contrast to PoW algorithms which offer Byzantine fault-tolerance under probabilistic assumptions.
"Byzantine Fault Tolerance is not an issue though. It’s actually really useful but for private blockchains."
A common misconception about the use of BFT algorithms in "public" (the author meant permissioned/permission-less) blockchains. BFT algorithms are only required to retain the permissioned status during the agreement phase (meaning that the new candidates will have to wait until the next consensus round to be able to participate in the consensus) and can have a round robin algorithm implemented to select the next pool of validators.
"Of course, in a decentralized protocol — something like that is very hard to achieve."
The research paper quoted in the article examines the efficiency of Castro and Liskov's PBFT (Practical Byzantine Fault Tolerance) algorithm, which is dissimilar from dBFT because PBFT doesn't require a primary change after every consensus round, a difference which impacts the performance in a decentralized network.
“At the other extreme, Hyperledger uses the classic PBFT protocol, which is communication bound: O(N²) where N is the number of nodes. PBFT can tolerate fewer than N/3 failures, and works in three phases in which nodes broadcast messages to each other. First, the pre-prepare phase selects a leader which chooses a value to commit. Next, the prepare phase broadcasts the value to be validated. Finally, the commit phase waits for more than two third of the nodes to confirm before announcing that the value is committed. PBFT has been shown to achieve liveness and safety properties in a partially asynchronous model [11], thus, unlike PoW, once the block is appended it is confirmed immediately. It can tolerate more failures than PoW (which is vulnerable to 25% attacks [26]). However, PBFT assumes that node identities are known, therefore it can only work in the permissioned settings. Additionally, the protocol is unlikely to be able to scale to the network size of Ethereum because of its communication overhead.”
This statement will require a separate post to examine the real-world "permission-lessness" of PoW chains.
"NEO codebase is virtually abandoned."
neo-sharp? neo-go?
"This is purportedly in favor of $NEO 3.0, but there’s no GitHub for $NEO 3.0 (at least not any that I’ve found)"
https://github.com/neo-project/neo/pull/288.
"The idea of it being able to handle 1000 TPS has been thoroughly debunked and it is virtually impossible (probably entirely impossible) for $NEO to create a public blockchain based on DBFT (essentially POS+BFT semantically), that keeps the same encryption signatures (which are probably the only ones that will reliably serve the purpose of crypto where collision resistance must be all but a guarantee)."
dBFT cannot be equated to PoS + BFT as none of those are delegate-centered protocols. How was 1000 TPS thoroughly debunked? With the neo-sharp implementation and Akka being launched, I don't see a reason for dBFT to not be able to surpass 1,000 TPS during peak loads (not during sustained loads though). The excerpt about the collision resistance of "encryption signatures" (?) makes no sense to me.
Here are the promised screenshots of our conversation:
Screenshot 1

Screenshot 2

Screenshot 3
P.S. It is sad to see the so-called "researchers" attracting a mass following despite being clueless about the technology they are trying to review.

submitted by toghrulmaharramov to NEO

I would like to share with you my current set of beliefs regarding Bitcoin.

I would like to share with you my current set of beliefs regarding Bitcoin. It’s up to you to believe it, take it at face value, refute it or discuss it below.
submitted by wisequote to btc

The core concepts of DTube's new blockchain

Dear Reddit community,
Following our announcement for DTube v0.9, I have received countless questions about the new blockchain part, avalon. First I want to make it clear, that it would have been utterly impossible to build this on STEEM, even with the centralized SCOT/Tribes that weren't available when I started working on this. This will become much clearer as you read through the whole wall of text and understand the novelties.
SteemPeak says this is a 25 minutes read, but if you are truly interested in the concept of a social blockchain, and you believe in its power, I think it will be worth the time!

MOVING FORWARD

I'm a long time member of STEEM, with tens of thousands of staked STEEM for 2 years+. I understand the instinctive fear from the other members of the community when they see a new crypto project coming out. We've had two recent examples recently with the VOICE and LIBRA annoucements, being either hated or ignored. When you are invested morally, and financially, when you see competitors popping up, it's normal to be afraid.
But we should remember competition is healthy, and learn from what these projects are doing and how it will influence us. Instead, by reacting the way STEEM reacts, we are putting our heads in the sand and failing to adapt. I currently see STEEM like the "North Korea of blockchains", trying to do everything better than other blockchains, while being #80 on coinmarketcap and slowly but surely losing positions over the months.
When DLive left and revealed their own blockchain, it really got me thinking about why they did it. The way they did it was really scummy and flawed, but I concluded that in the end it was a good choice for them to try to develop their activity, while others waited for SMTs. Sadly, when I tried their new product, I was disappointed, they had botched it. It's purely a donation system, no proof of brain... And the ultra-majority of the existing supply is controlled by them, alongside many other 'anti-decentralization' features. It's like they had learnt nothing from their STEEM experience at all...
STEEM was still the only blockchain able to distribute crypto-currency via social interactions (and no, 'donations' are not social interactions, they are monetary transfers; bitcoin can do it too). It is the killer feature we need. Years of negligence or greed from the witnesses/developers about the economic balance of STEEM is what broke this killer feature. Even when proposing economical changes (which are actually getting through finally in HF21), the discussions have always been centered around modifying the existing model (changing the curve, changing the split, etc), instead of developing a new one.
You never change things by fighting the existing reality.
To change something, build a new model that makes the existing model obsolete.
What if I built a new model for proof of brain distribution from the ground up? I first tried playing with STEEM clones, I played with EOS contracts too. Both systems couldn't do the concepts I wanted to integrate for DTube, unless I did a major refactor of tens of thousands of lines of code I had never worked with before. Making a new blockchain felt like a lighter task, and more fun too.
Before even starting, I had a good idea of the concepts I'd love to implement. Most of these bullet points stemmed from observations of what happened here on STEEM in the past, and what I considered weaknesses for d.tube's growth.

NO POWER-UP

The first concept I wanted to implement deep down in the core of how a DPOS chain works is that I didn't want the token to be staked, at all (i.e. no 'powering up'). The cons of staking for a decentralized social platform are obvious:
* complexity for the users with the double token system.
* difficulty to onboard people as they need to freeze their money, akin to a pyramid scheme.
The only good thing about staking is how it can fill your bandwidth and your voting power when you power-up, so you don't need to wait for it to grow to start transacting. In a fully-liquid system, your account resources start at 0% and new users will need to wait for them to grow before they can start transacting. I don't think that's a big issue.
That meant that witness elections had to be run out of the liquid stake. Could it be done? Was it safe for the network? Can we update the cumulative votes for witnesses without rounding issues? Even when the money flows between accounts freely?
Well I now believe it is entirely possible and safe, under certain conditions. The incentive for top witnesses to keep on running the chain is still present even if the stake is liquid. With a bit of discrete mathematics, it's easy to have a perfectly deterministic algorithm to run a decentralized election based off liquid stake, it's just going to be more dynamic as the funds and the witness votes can move around much faster.
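A minimal sketch of how such a liquid-stake election can stay deterministic (my own reading of the idea, not avalon's actual code): every transfer immediately shifts the tallies of the witnesses approved by the sender and receiver.

    balances = {"alice": 500, "bob": 200}
    approvals = {"alice": ["w1"], "bob": ["w1", "w2"]}
    tally = {"w1": 700, "w2": 200}   # cumulative liquid stake behind each witness

    def transfer(src, dst, amount):
        balances[src] -= amount
        balances[dst] += amount
        for w in approvals.get(src, []):
            tally[w] -= amount       # the sender's witnesses lose that weight
        for w in approvals.get(dst, []):
            tally[w] += amount       # the receiver's witnesses gain it

    transfer("alice", "bob", 100)
    assert tally == {"w1": 700, "w2": 300}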

NO EARLY USER ADVANTAGE

STEEM has had multiple events that influenced the distribution in a bad way. The most obvious one is the inflation settings. One day it was hella-inflationary, then suddenly with hard fork 16 it wasn't anymore. Another major one is the non-linear rewards that ran for a long time, which created a huge early-user advantage that we can still feel today.
I liked linear rewards, it's what gives minnows their best chance while staying sybil-resistant. I just needed Avalon's inflation to be smart, not hyper-inflationary. The key metric to consider for this issue is the number of tokens distributed per user per day. If this metric goes down, then the incentive for staying on the network and playing the game goes down every day. You feel like you're making less and less from your efforts. If this metric goes up, the number of printed tokens goes up, the token is hyper-inflationary, and holding it feels really bad if you aren't actively earning from the inflation by playing the game.
Avalon ensures that the number of printed tokens is proportional to the number of users with active stake. If more users come in, avalon prints more tokens, if users cash-out and stop transacting, the inflation goes down. This ensures that earning 1 DTC will be about as hard today, tomorrow, next month or next year, no matter how many people have registered or left d.tube, and no matter what happens on the markets.
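A hedged sketch of that emission policy (all constants invented for illustration):

    REWARD_PER_ACTIVE_USER_PER_DAY = 1.0   # assumed policy constant, in DTC
    BLOCKS_PER_DAY = 28_800                # e.g. with 3-second blocks

    def block_reward(active_users: int) -> float:
        # Printed tokens scale with active users, so tokens-per-user-per-day
        # stays flat no matter how the user base grows or shrinks.
        return active_users * REWARD_PER_ACTIVE_USER_PER_DAY / BLOCKS_PER_DAY

    print(block_reward(1_000))   # more active users -> larger emission per block
    print(block_reward(10))      # fewer active users -> smaller emission per block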

NO LIMIT TO MY VOTING POWER

Another big issue that most steemians don't really know about, but that is really detrimental to STEEM, is how the voting power mana bar works. I guess having to manage a 2M SP delegation for @dtube really convinced me of this one.
When your mana bar is full at 100%, you lose out on the potential power generation, and the rewards coming from it. And it only takes 5 days to go from 0% to 100%. A lot of people have very valid reasons to be offline for 5 days+, they shouldn't be punished so hard. This is why almost all big stake holders make sure to always spend some of their voting power on a daily basis. And this is why minnows or smaller holders miss out on tons of curation rewards, unless they delegate to a bidbot or join some curation guild... meh. I guess a lot of people would rather just cash out and not deal with the trouble of having to optimize their stake.
So why is it even a mana bar? Why can't it grow forever? Well, everything in a computer has to have a limit, but why is this limit proportional to my stake? While I totally understand the purpose of making bandwidth limited and forcing big stakeholders to waste it, I think that's totally unneeded and ill-suited for voting power. As long as the growth of the VP is proportional to the stake, the system stays sybil-resistant, and there could technically be no limit at all, if it weren't for the fact that this runs on a computer where numbers have a limited number of bits.
On Avalon, I made it so that your voting power grows virtually indefinitely, or at least I don't think anyone will ever reach the current limit of Number.MAX_SAFE_INTEGER: 9007199254740991, or about 9 peta-VP. If you go inactive for 6 months on an account with some DTCs, when you come back you will have 6 months' worth of power generation to spend, turning you into a whale, at least for a few votes.
Another awkward limit on STEEM is how a 100% vote spends only 2% of your power. Not only does STEEM force you to be active on a daily basis, you also need to do a minimum of 10 votes/day to optimize your earnings. On Avalon, you can use 100% of your stored voting power in a single mega-vote if you wish; it's up to you.
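A simplified sketch of that accrual rule (my own reconstruction with an invented rate, not Avalon's source):

    # Sketch: voting power accrues in proportion to stake and elapsed time,
    # with no mana-bar cap; only the machine's integer limit bounds it.
    VP_PER_TOKEN_PER_SECOND = 1  # hypothetical rate

    def accrued_vp(stake, seconds_elapsed, current_vp=0):
        return current_vp + stake * VP_PER_TOKEN_PER_SECOND * seconds_elapsed

    vp = accrued_vp(stake=1_000, seconds_elapsed=6 * 30 * 24 * 3600)  # ~6 months offline
    print(vp)         # all of it is still there when the user comes back
    vote_spend = vp   # a single 100% 'mega-vote' is allowed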

A NEW PROOF-OF-BRAIN

No Author rewards

People should vote with the intent of getting a reward from it. If 75% of the value forcibly goes to the author, it's hard to expect a good return from curation. Steem is basically a complex donation platform right now. No one wants to donate when they vote, no matter what they say, and no matter how much vote-trading, self-voting or bid-botting happens.
So, to keep a system where money is printed when votes happen, if we cannot use the author's username to distribute rewards, the only possibility left is to use the list of previous voters, aka "curation rewards" - the 25% of STEEM that is actually interesting, and that has been totally overshadowed by the author rewards for too long.

Downvote rewards

STEEM has always suffered from the issue that the downvote button is unused, or when it's used, it's mostly for evil. This comes from the fact that in STEEM's model, downvotes are not eligible for any rewards. Even if they were, your downvote would be lowering the final payout of the content, and your own curation rewards...
I wanted Avalon's downvotes to be completely symmetric to the upvotes. That means if we revert all the votes (upvotes become downvotes and vice versa), the content should still distribute the same amount of tokens to the same people, at the same time.

No payment windows

Steem has a system of payment windows. When you publish content, it opens a payment window where people can freely upvote or downvote to influence the payout happening 7 days later. This is convenient when you want a system where downvotes lower rewards. But waiting 7 days to collect rewards is another friction point for new users; some of them might never come back 7 days later to convince themselves that 'it works'. On avalon, when you are among the winners of curation after a vote, you earn it instantly in your account, 100% liquid and transferable.

Unlimited monetization in time

Indeed, the 7-day monetization limit has been our biggest issue for our video platform since day 8. It incentivized our users to create more frequent but lower-quality content, as they knew they weren't going to earn anything over the long haul. Monetization had to be unlimited on DTube, so that even a two-year-old video could be dug up and generate rewards far in the future.
Infinite monetization is possible, but as removing tokens from a balance is impossible, downvotes cannot remove money from the payout like they do on STEEM. Instead, downvotes print money the same way upvotes do; downvotes still lower the popularity in hot and trending, and should only reward the other people who downvoted the same content earlier.

New curation rewards algorithm

STEEM's curation algorithm isn't stupid, but I believe it lacks some elegance. The 15-minute 'band-aid' they added to prevent curation bots (bots that auto-vote as fast as possible on content from popular authors) proves it. The way it distributes the rewards also feels very flat and boring. The rewards for my votes are very predictable, especially if I'm the biggest voter / stakeholder for the content. My own vote is paying for my own curation rewards; how stupid is that? If no one else votes after my big vote despite the popularity boost, it probably means I deserve 0 rewards, no?
I had to try different approaches to find an algorithm yielding interesting results, with infinite monetization, and without obvious ways to exploit it. The final distribution algorithm is more complex than STEEM's curation, but it's still pretty simple. When a vote is cast, we calculate the 'popularity' at the time of the vote. The first vote is given a popularity of 0; the next votes are defined by (total_vp_upvotes - total_vp_downvotes) / time_since_1st_vote. Then we look into the list of previous votes and remove all votes in the opposite direction (up/down). Then we remove all the votes with a higher popularity if it's an upvote, or the ones with a lower popularity if it's a downvote. The remaining votes in the list are the 'winners'. Finally, akin to STEEM, the amount of tokens generated by the vote is split between winners proportionally to the voting power spent by each (linear rewards - no advantage for whales) and distributed instantly. Instead of purely using the order of the votes, Avalon's distribution is based on when the votes are cast, and each second that passes reduces the popularity of a content, potentially increasing the long-term ROI of the next vote cast on it.
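Here is a condensed sketch of that winner-selection rule as I read it (my own reconstruction in Python, not Avalon's actual source code):

    # Votes are dicts with keys: 'voter', 'vp' (voting power spent),
    # 'up' (bool), 'pop' (popularity at the time the vote was cast).
    def popularity(votes, now, first_vote_time):
        up = sum(v["vp"] for v in votes if v["up"])
        down = sum(v["vp"] for v in votes if not v["up"])
        return (up - down) / max(now - first_vote_time, 1)

    def winners(previous_votes, new_vote):
        # Keep only previous votes in the same direction...
        same_dir = [v for v in previous_votes if v["up"] == new_vote["up"]]
        if new_vote["up"]:
            # ...then drop those cast at a higher popularity than the new upvote.
            return [v for v in same_dir if v["pop"] <= new_vote["pop"]]
        # For a downvote, drop those cast at a lower popularity instead.
        return [v for v in same_dir if v["pop"] >= new_vote["pop"]]

    def distribute(reward, eligible):
        # Linear split, proportional to VP spent -- no advantage for whales.
        total_vp = sum(v["vp"] for v in eligible)
        return {v["voter"]: reward * v["vp"] / total_vp for v in eligible} if total_vp else {}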
[Graph: the popularity curve that influences the DTC monetary distribution can be charted directly in the d.tube UI]
This algorithm ensures there are always losers. The person who upvoted at the highest popularity and the one who downvoted at the lowest popularity never receive any rewards for their vote, and neither do the last upvoter and last downvoter. All the other ones in the middle may or may not receive something, depending on how the voting and the popularity evolved over time. The one with an obvious advantage is the first voter, who is always counted as 0 popularity. As long as the content stays at a positive popularity, every upvote will earn him rewards. Similarly, being the first downvoter on an overly-popular content could easily earn you 100% of the rewards on the next downvote, which could come from a whale, earning you a fat bonus.
While Avalon doesn't technically have author rewards, the first-voter advantage is strong, and the author has the advantage of always being able to vote first. So the author can still earn from his potentially original creations; he just needs to commit some voting power on his own content to be able to publish.

ONE CHAIN <==> ONE APP

More scalable than shared blockchains

Another issue with generalistic blockchains like ETH/STEEM/EOS/TRX, which are currently hosting dozens of semi-popular web/mobile apps, is the reduced scalability of such shared models. Again, everything in a computer has a limit. For DPOS blockchains, 99%+ of the CPU load of a producing node goes to verifying the signatures of the many transactions coming in every 3 seconds. And sadly, this fact will not change with time. Even if we had a huge breakthrough in CPU speeds today, we would need to update the cryptographic standards for blockchains to keep them secure, meaning it would NOT become easier to scale up the number of verifiable transactions per second.
Oh, but you're thinking we are not there yet? Or maybe you think we'll all be rich if we reach the scalability limits, so it doesn't really matter? WRONG.
The limit is the number of signature verifications the most expensive CPU on the planet can do. Most blockchains use the secp256k1 curve, including Bitcoin, Ethereum, Steem and now Avalon. It was originally chosen for Bitcoin by Satoshi Nakamoto, probably because it's decently quick at verifying signatures and seems to be backdoor-proof (or else someone is playing a very patient game). Other curves with faster signature verification may exist, but the speed won't improve many-fold, and any replacement would require much research, auditing, and time to get adopted, considering the security implications.
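To get a rough feel for this bottleneck, here is a small benchmark sketch using the pure-Python `ecdsa` package (my choice of tool - production nodes use the far faster C library libsecp256k1, so treat the number as a loose lower bound):

    # Rough throughput benchmark of secp256k1 ECDSA verification on one core.
    import time
    from ecdsa import SigningKey, SECP256k1

    sk = SigningKey.generate(curve=SECP256k1)
    vk = sk.get_verifying_key()
    msg = b"a typical transaction digest"
    sig = sk.sign(msg)

    N = 200
    start = time.perf_counter()
    for _ in range(N):
        vk.verify(sig, msg)
    elapsed = time.perf_counter() - start
    print(f"{N / elapsed:.0f} verifications/sec on one core")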
In 2015 Graphene was created and BitShares was completely rewritten. It was able to achieve 100,000 transactions per second on a single machine, and decentralized global stress testing achieved 18,000 transactions per second on a distributed network.
So BitShares/STEEM and other DPOS graphene chains in production can validate at most 18,000 txs/sec, about 1.5 billion transactions per day. EOS, Tendermint, Avalon, LIBRA or any other DPOS blockchain can achieve similar speeds, because there's no planet-killing proof-of-work, and thanks to the leader-based/democratic system that reduces the number of nodes taking part in the consensus.
As a comparison, there are about 4 billion likes per day on Instagram, so you can probably double that once you add the actual uploads, stories, comments, password changes, etc. The load is also likely uneven through the day; some hours probably run at twice the average. You wouldn't be able to fit Instagram on a blockchain, ever, even with the most scalable blockchain tech on the world's best hardware. You'd need about a dozen of those chains. And Instagram is still a growing platform, not as big as Facebook or YouTube.
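The back-of-the-envelope arithmetic behind that claim (my own rounding):

    # Quick arithmetic sketch for the capacity comparison above.
    tps = 18_000                        # best observed DPOS stress-test throughput
    chain_per_day = tps * 86_400        # ~1.55 billion tx/day
    insta_actions = 4_000_000_000 * 2   # likes doubled for uploads, comments, etc.
    peak_factor = 2                     # busy hours run ~2x the daily average
    print(chain_per_day)                                # 1555200000
    print(insta_actions * peak_factor / chain_per_day)  # ~10 chains needed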
So, splitting this limit between many popular apps? Madness! Maybe it's still working right now, but when many different apps reach millions of daily active users plus bots, it won't fit anymore.
Serious projects with a big user base will need to rethink the shared blockchain models like Ethereum, EOS, TRX, etc., because the fees in gas or the necessary stake required to transact will skyrocket, and the victims will be the hordes of minnows at the bottom of the distribution spectrum.
If we can't run a full instagram on a DPOS blockchain, there is absolutely no point trying to run medium+reddit+insta+fb+yt+wechat+vk+tinder on one. Being able to run half an instagram is already pretty good and probably enough to actually onboard a fair share of the planet. But if we multiply the load by the number of different app concepts available, then it's never gonna scale.
DTube chain is meant for the DTube UI only. Please do not build something unrelated to video connecting to our chain; we would actively do what we can to prevent you from growing. We want this chain to be for video content only, and the JSON format of the contents should always follow the one used by d.tube.
If you are interested in avalon tech but your project isn't about video, it's strongly suggested to fork the blockchain code and run your own avalon chain with a different origin id, instead of trying to connect your project to dtube's mainnet. If you still want to do it, chain leaders would be forced to actively combat your project, as we would consider it useless noise inside our dedicated blockchain.

Focused governance

Another issue with sharing a blockchain is the governance problems that come with it. Tons of features enabled by avalon would be controversial to develop on STEEM, because they'd only benefit DTube, and might even hurt or break some other projects. At best they'd be put at the bottom of a todo list somewhere. Having a blockchain dedicated to a single project enables it to quickly push updates that are focused on a single product, not dozens of totally different projects.
Many blockchain projects are trying to make decentralized governance a reality, but this is absolutely not what I am interested in for DTube. Instead, in avalon the 'init' account, or 'master' account, has very strong permissions. In the DTC case, @dtube:
* will earn 10% fees from all the inflation
* will not have to burn DTCs to create accounts
* will be able to do certain types of transactions when others can't:
  * account creation (during the steem exclusivity period)
  * transfers (during the IEO period)
  * transferring voting power and bandwidth resources (used for easier onboarding)
For example, for our IEO we will set up a mainnet where only @dtube is allowed to transfer funds or vote until the IEO completes and the airdrop happens. This is also what enabled us to create a 'steem-only' registration period on the public testnet for the first month. Only @dtube can create accounts, so we can enforce a 1-month period where users can port their username for free, without imposters having a chance to steal usernames. Through the hard-forking mechanism, we can enable or disable these limitations and easily evolve the rules and permissions of the blockchain, for example opening monetary transfers at the end of our IEO, or opening account creation once the steem exclusivity ends.
Luckily, avalon is decentralized, and all these parameters (like the @dtube fees, and @dtube permissions) are easily hardforkable by the leaders. @dtube will however be a very strong leader in the chain, as we plan to use our vote to at least keep the #1 producing node for as long as we can.
We reserve the right to 'not follow' a hardfork. For example, it's obvious we wouldn't follow something like reducing our fees to 0%, as it would financially endanger the project; we would rather continue our official fork on our own and plug the d.tube domain and mobile app into it.
On the other end of the spectrum, if other leaders think @dtube is being tyrannical one way or another, they will always have the option of declining new hardforks and putting the system on hold. Then @dtube will have an issue and will need to compromise, or betray the trust of 1/3 of the stakeholders, which could prove costly.
The goal is to have harmonious, enterprise-level decision making within the top leaders. We expect these leaders to be financially and emotionally connected with the project and to act for its good. @dtube is expected to be the main good actor for the chain, and any permission given to it should be granted with the goal of increasing the DTC marketcap, and nothing else. Leaders and @dtube should be able to keep cooperation high enough to keep the hard-forks focused on the actual issues, and flowing faster than in other blockchain projects striving for a totally decentralized governance, a goal they are unlikely to ever achieve.

PERFECT IMBALANCE

A lot of hard-forking

Avalon is easily hard-forkable, and will get hard-forked often, on purpose. No replays will be needed for leaders/exchanges during these hard-forks; just pull the new hardfork code and restart the node before the planned hard-fork time to stay on the main fork. Why is this so crucial? It comes down to game theory.
I have no formal proof for this, but I assume a social and financial game akin to the one played on steem since 2016 is impossible to balance perfectly, even with a thorough, dichotomic process. It's probably because of some psychological reason, or maybe just the fact that humans are naturally greedy. Or maybe it's just the sheer number of players. They can gang up together, try to counter each other, and find all sorts of creative ideas to earn more and exploit each other. In the end, the slightest change in the rules can cause drastic gameplay changes. It's a real problem; luckily, it's been faced by other people in the past.
Similarly to what popular and successful massively multiplayer games have achieved, I plan to patch or suggest hard-forks for avalon's mainnet on a bi-monthly basis. The goal of this perfect-imbalance concept is to force players to re-discover their best strategy often. By introducing regular, small, and semi-controlled changes into this chaos, we can fake balance. This will require players to be more adaptive and aware of the changes. It prevents the game from becoming stale and boring for players, while staying fair.

Death to bots

Automators, on the other hand, will need to re-think their bots and go through the development and testing phase again on every new hard-fork. It will be an unfair cat-and-mouse game. Making small and semi-random changes in frequent hard-forks will be an easy task for the dtube leaders, compared to the workload generated for maintaining the bots. In the end, I hope their return on investment will be much lower than the bid-bots', up to the point where there is no automation at all.
Imagine how different things would have been if SteemIt Inc had acted strongly against bid-bots or other forms of automation when they started appearing. Imagine if hard-forks were frequent and they had promised to fight bid-bots and their ilk. Who would be crazy enough to make a bid-bot then, apart from @berniesanders?
I don't want you to earn DTCs unless you are human. The way you are going to prove you are human is not by sending a selfie of yourself with your passport to a 3rd-party private company located on the other side of the world. You will just need to adapt to the new rules published every two weeks, and your human brain will do it subconsciously, just by playing the voting game and seeing the rewards come in.
All these concepts are aimed at directly improving d.tube, making it more resilient, and letting it scale both technologically and economically. Having control over the full tech stack required to power our dapp will prevent issues like the one we had with the search engine, where we relied too heavily on a 3rd-party tool, which created a 6-month-long bug that basically broke 1/3 of the UI.
While d.tube's UI can now run totally independently from any other entity, we kept everything we could working with STEEM, and the user is now able to transparently publish/vote/comment videos on 2 different chains with one click. This way we can keep leveraging the generalistic features that STEEM does well and our new chain doesn't focus on, such as the dollar-pegged token, the author rewards/donation mechanism, the tribes/communities tokens, and simply the extra exposure d.tube users can get from other websites (steemit.com, busy.org, partiko, steempeak, etc), which is larger than the number of people using d.tube directly.
The public testnet has been running pretty well for 3 weeks now, with 6000+ accounts registered and already a dozen independent nodes popping up and running for leader. The majority of the videos are cross-posted on both chains, and the daily video volume has slightly increased since the update, despite the added friction of the new 'double login' system and several UI bugs.
If you've read this article, I'm hoping to get some reactions from you in the comments section!
Some even more focused articles about avalon are going to pop up on my blog in the following weeks, such as how to get a node running and how to run for leader/witness, so feel free to follow me to get more news and help me reach 10K followers ;)
submitted by nannal to dtube [link] [comments]

Harvesting Cryptodust by Gambling: Winner Takes All

Cryptocurrencies generated (mined) on mobile devices or personal computers are worth so little that no one will feel their loss. Suppose we "harvest" all of these as a pool fund for gambling. Then winners can take all the prize money, like a lottery. More people can use it over time. This has great educational value. The operator of this system will make lots of money.
Let's call it Cryptodust for obvious reasons.
Anyone done this before?
Anyone interested in collaboration?
Updates:
https://blog.bitjson.com/just-released-webassembly-version-of-secp256k1-10x-faster-than-javascript-eb3cebe4d411
https://www.reddit.com/Bitcoin/comments/8oiljm/just_released_a_webassembly_version_of_bitcoins/
submitted by wengchunkn to btc [link] [comments]

TERA CRYPTO CURRENCY PROJECT

TERA is an open source and collaborative project. This means everyone can view and eventually modify its source code for their own needs. It also means anyone is welcome to join its working community. The Tera community works to develop, deploy and maintain Tera nodes and decentralized applications that are part of the TERA Network.
The TERA technology serves cryptocurrency concepts, trying to design a modern coin-and-contract blockchain application: fast block generation, high transaction throughput and a user-friendly application. It was officially launched on the 30th of June 2018 on the bitcointalk forum.
[Yuriy Ivanov](mailto:[email protected]) is the founder and core developer of the project. The Tera community is more familiar with the alias « vtools ».

USER FRIENDLY APPLICATION

With the aim of making this cryptocurrency project friendlier to end-users, some interesting innovations have been implemented compared to the first generation of cryptocurrency applications. Bitcoin and its thousands of children and forks required a good level of IT skills to manage the whole application chain on one's own: from miners and their hardware, through stratum servers and proxies, to blockchain nodes. The Tera project intends to go one step further regarding the integration of cryptocurrency features into a single application: once installed, an efficient web application is available on localhost on port 8080. Any web browser supporting javascript can then access this application and operate the Tera node fully.

MINING A CRYPTO CURRENCY

MINING CONCEPT

Mining consists of calling a mathematical procedure whose result we can't predict before running it, while aiming for a very specific result, which usually means a certain number of 0s as the first characters before any random answer. If we find the nonce (a random value) which, combined with the transaction data and the coin's algorithm, produces such a result, we'll have solved a transaction block and we'll get a reward for it. Thanks to this work, the transactions listed in the block will be added to the blockchain and anyone will be able to check our work. That's the concept of 'proof of work': anyone can replay the mathematical procedure with the nonce discovered by the node that solved the block, and confirm the block's inclusion into the blockchain.
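As an illustration, here is a bare-bones nonce search in Python (generic proof-of-work, not Tera's actual algorithm, which is GPU-resistant and more involved):

    # Find a nonce so the block hash starts with `difficulty` zero hex chars.
    import hashlib

    def mine(block_data: bytes, difficulty: int = 4):
        nonce = 0
        target = "0" * difficulty
        while True:
            h = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
            if h.startswith(target):
                return nonce, h  # anyone can replay this call to verify the block
            nonce += 1

    nonce, h = mine(b"tx1;tx2;tx3")
    print(nonce, h)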

POLITICAL AND ETHICAL CONSIDERATIONS

The Tera project is young. It will have to face the same problems the Bitcoin platform is facing today:
Any cryptocurrency project whose goal is for its money and contracts to be used like any other historical money or service contract has to consider its political and ethical usage. Processes have to be imagined, designed and implemented in order to fight the extortion, corruption and illegal activities threatening cryptocurrency development.

FAST BLOCK GENERATION AND HIGH THROUGHPUT

CLASSIC CRYPTO CURRENCY FEATURES

wallet, accounts, payments, mining, node settings and utilities, blockchain explorer and utilities…

DECENTRALIZED APP CATALOGUE

d-apps: forum, stock exchange, payment plugins for third-party platforms, …

TECHNOLOGY DEPENDENCIES

Tera is entirely written in JavaScript on top of NodeJS as its functional layer, in order to take advantage of a robust, high-level library designed to allow large and effective network node management.
The miner part is imported from an external repository and is written in C, in order to get the best performance for this module.
Tera is currently officially supported on Linux and Windows.
If you start mining Tera thanks to this article, you can add my account 188131 as an advisor to yours. On simple demand I'll refund you half of the extra coins generated for advisors when you solve blocks (@freddy#8516 on discord).

MINING TERA

Mining Tera has one major design constraint: you need one public IP per Tera node or miner. Still, you can easily mine it on a desktop computer at home. The mining algorithm has been designed to be GPU-resistant. In order to mine Tera coin you'll need a multi-core processor (2 cores minimum) and some RAM, between 1 and 4GB per mining process. The mining reward level depends on the « power » used to solve a block (Top Tera Miners).

COST AND USAGE CONSIDERATIONS

There are two main cost centers when mining a cryptocurrency:
  1. the cost of the hardware and the energy required to perform a huge number of mathematical operations connected to the blockchain network through the Internet,
  2. the human cost of deploying, maintaining and keeping miners and blockchain nodes running.
As speculation currently drives the value of cryptocurrencies, it is not possible to say whether mining is profitable or not. Moreover, hardware, energy and human costs are not the same around the globe. To assess whether mining a cryptocurrency is profitable, we should take all indirect costs into account: environmental costs (for hardware and energy production) and human costs (coin and contract usage, social rights of blockchain workers).

Original: https://freddy.linuxtribe.fr/recherche-et-developpement/blockchain-cryptocurrency-mining/tera-crypto-currency-project/
Author: Freddy Frouin, [email protected].
submitted by Terafoundation to u/Terafoundation [link] [comments]

Updating the Scaling Roadmap | Paul Sztorc | Jul 10 2017

Paul Sztorc on Jul 10 2017:
Summary

In my opinion, Greg Maxwell's scaling roadmap [1] succeeded in a few
crucial ways. One success was that it synchronized the entire Bitcoin
community, helping to bring finality to the (endless) conversations of
that time, and get everyone back to work. However, I feel that the Dec
7, 2015 roadmap is simply too old to serve this function any longer. We
should revise it: remove what has been accomplished, introduce new
innovations and approaches, and update deadlines and projections.
Why We Should Update the Roadmap

In a P2P system like Bitcoin, we lack authoritative info-sources (for
example, a "textbook" or academic journal), and as a result
conversations tend to have a problematic lack of progress. They do not
"accumulate", as everyone must start over. Ironically, the scaling
conversation itself has a fatal O(n^2) scaling problem.
The roadmap helped solve these problems by being constant in size, and
subjecting itself to publication, endorsement, criticism, and so forth.
Despite the (unavoidable) nuance and complexity of each individual
opinion, it was at least globally known that X participants endorsed Y
set of claims.
Unfortunately, the Dec 2015 roadmap is now 19 months old -- it is quite
obsolete and replacing it is long overdue. For example, it highlights
older items (CSV, compact blocks, versionbits) as being future
improvements, and makes no mention of new high-likelihood improvements
(Schnorr) or mis-emphasizes them (LN). It even contains mistakes (SegWit
fraud proofs). To read the old roadmap properly, one must already be a
technical expert. For me, this defeats the entire point of having one in
the first place.
A new roadmap would be worth your attention, even if you didn't sign it,
because a refusal to sign would still be informative (and, therefore,
helpful)!
So, with that in mind, let me present a first draft. Obviously, I am
strongly open to edits and feedback, because I have no way of knowing
everyone's opinions. I admit that I am partially campaigning for my
Drivechain project, and also for this "scalability"/"capacity"
distinction...that's because I believe in both and think they are
helpful. But please feel free to suggest edits.
I emphasized concrete numbers, and concrete dates.
And I did NOT necessarily write it from my own point of view, I tried
earnestly to capture a (useful) community view. So, let me know how I did.
==== Beginning of New ("July 2017") Roadmap Draft ====
This document updates the previous roadmap [1] of Dec 2015. The older
statement endorsed a belief that "the community is ready to deliver on
its shared vision that addresses the needs of the system while upholding
its values".
That belief has not changed, but the shared vision has certainly grown
sharper over the last 18 months. Below is a list of technologies which
either increase Bitcoin's maximum tps rate ("capacity"), or which make
it easier to process a higher volume of transactions ("scalability").
First, over the past 18 months, the technical community has completed a
number of items [2] on the Dec 2015 roadmap. VersionBits (BIP 9) enables
Bitcoin to handle multiple soft fork upgrades at once. Compact Blocks
(BIP 152) allows for much faster block propagation, as does the FIBRE
Network [3]. Check Sequence Verify (BIP 112) allows trading partners to
mutually update an active transaction without writing it to the
blockchain (this helps to enable the Lightning Network).
Second, Segregated Witness (BIP 141), which reorganizes data in blocks
to handle signatures separately, has been completed and awaits
activation (multiple BIPS). It is estimated to increase capacity by a
factor of 2.2. It also improves scalability in many ways. First, SW
includes a fee-policy which encourages users to minimize their impact on
the UTXO set. Second, SW achieves linear scaling of sighash operations,
which prevents the network from crashing when large transactions are
broadcast. Third, SW provides an efficiency gain for everyone who is not
verifying signatures, as these no longer need to be downloaded or
stored. SegWit is an enabling technology for the Lightning Network,
script versioning (specifically Schnorr signatures), and has a number of
benefits which
are unrelated to capacity [4].
Third, the Lightning Network, which allows users to transact without
broadcasting to the network, is complete [5, 6] and awaits the
activation of SegWit. For those users who are able to make a single
on-chain transaction, it is estimated to increase both capacity and
scalability by a factor of ~1000 (although these capacity increases will
vary with usage patterns). LN also greatly improves transaction speed
and transaction privacy.
Fourth, Transaction Compression [7] observes that Bitcoin transaction
serialization is not optimized for storage or network communication. If
transactions were optimally compressed (as is possible today), this
would improve scalability, but not capacity, by roughly 20%, and in some
cases over 30%.
Fifth, Schnorr Signature Aggregation, which shrinks transactions by
allowing many transactions to have a single shared signature, has been
implemented [8] in draft form in libsecp256k1, and will likely be ready
by Q4 of 2017. One analysis [9] suggests that signature aggregation
would result in storage and bandwidth savings of at least 25%, which
would therefore increase scalability and capacity by a factor of 1.33.
The relative savings are even greater for multisignature transactions.
Sixth, drivechain [10], which allows bitcoins to be temporarily
offloaded to 'alternative' blockchain networks ("sidechains"), is
currently under peer review and may be usable by end of 2017. Although
it has no impact on scalability, it does allow users to opt-in to
greater capacity, by moving their BTC to a new network (although, they
will achieve less decentralization as a result). Individual drivechains
may have different security tradeoffs (for example, a greater reliance
on UTXO commitments, or MimbleWimble's shrinking block history) which
may give them individually greater scalability than mainchain Bitcoin.
Finally, the capacity improvements outlined above may not be sufficient.
If so, it may be necessary to use a hard fork to increase the blocksize
(and blockweight, sigops, etc) by a moderate amount. Such an increase
should take advantage of the existing research on hard forks, which is
substantial [11]. Specifically, there is some consensus that Spoonnet
[12] is the most attractive option for such a hardfork. There is
currently no consensus on a hard fork date, but there is a rough
consensus that one would require at least 6 months to coordinate
effectively, which would place it in the year 2018 at earliest.
The above are only a small sample of current scaling technologies. And
even an exhaustive list of scaling technologies, would itself only be a
small sample of total Bitcoin innovation (which is proceeding at
breakneck speed).
Signed,
[1]
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/011865.html
[2] https://bitcoincore.org/en/2017/03/13/performance-optimizations-1/
[3] http://bluematt.bitcoin.ninja/2016/07/07/relay-networks/
[4] https://bitcoincore.org/en/2016/01/26/segwit-benefits/
[5]
http://lightning.community/release/software/lnd/lightning/2017/05/03/litening/
[6] https://github.com/ACINQ/eclair
[7] https://people.xiph.org/~greg/compacted_txn.txt
[8]
https://github.com/ElementsProject/secp256k1-zkp/blob/d78f12b04ec3d9f5744cd4c51f20951106b9c41a/src/secp256k1.c#L592-L594
[9] https://bitcoincore.org/en/2017/03/23/schnorr-signature-aggregation/
[10] http://www.drivechain.info/
[11] https://bitcoinhardforkresearch.github.io/
[12]
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-February/013542.html
==== End of Roadmap Draft ====
In short, please let me know:
  1. If you agree that it would be helpful if the roadmap were updated.
  2. To what extent, if any, you like this draft.
  3. Edits you would make (specifically, I wonder about Drivechain
thoughts and Hard Fork thoughts, particularly how to phrase the Hard
Fork date).
Google Doc (if you're into that kind of thing):
https://docs.google.com/document/d/1gxcUnmYl7yM0oKR9NY9zCPbBbPNocmCq-jjBOQSVH-A/edit?usp=sharing
Cheers,
Paul
original: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-July/014718.html
submitted by dev_list_bot to bitcoin_devlist [link] [comments]

Pentesterlab. ECDSA challenge

Hi there,

I am struggling with the Pentesterlab challenge: https://pentesterlab.com/exercises/ecdsa

I'm wondering if someone can shed some light on how to solve some steps in this challenge. You can read about a similar challenge here - https://ropnroll.co.uk/2017/05/breaking-ecdsa/
I suppose I have problems with extracting (r,s) from the ECDSA (SECP256k1) signature (details here - https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm)

I even tried to brute-force all possible (r,s) values, but no luck. Every time I receive error 500.

    import hashlib
    from ecdsa import SigningKey, SECP256k1, BadSignatureError
    from ecdsa.util import string_to_number
    from ecdsa.numbertheory import inverse_mod

    def sha2(data):
        # Helper assumed by the original snippet (its definition was not shown in the post).
        if isinstance(data, str):
            data = data.encode()
        return hashlib.sha256(data).digest()

    def recover_key(c1, sig1, c2, sig2, r_len, s_len):
        n = SECP256k1.order
        for s_idx in range(s_len, s_len + 2):
            for r_idx in range(r_len, r_len + 2):
                s1 = string_to_number(sig1[-s_idx:])
                s2 = string_to_number(sig2[-s_idx:])
                # https://bitcoin.stackexchange.com/questions/58853/how-do-you-figure-out-the-r-and-s-out-of-a-signature-using-python
                r1 = string_to_number(sig1[-(s_idx + r_idx + 2):-s_idx])
                r2 = string_to_number(sig2[-(s_idx + r_idx + 2):-s_idx])
                z1 = string_to_number(sha2(c1))
                z2 = string_to_number(sha2(c2))
                # If the same nonce k was reused for both signatures:
                # k = (z1 - z2) / (s1 - s2) mod n
                k = (((z1 - z2) % n) * inverse_mod((s1 - s2), n)) % n
                # Recover the private key: da = (s1 * k - z1) / r1 mod n
                da1 = ((((s1 * k) % n) - z1) * inverse_mod(r1, n)) % n
                # SECP256k1 is the Bitcoin elliptic curve
                sk = SigningKey.from_secret_exponent(da1, curve=SECP256k1, hashfunc=hashlib.sha256)
                # Forge a signature for the target account
                login_tgt = "admin"
                login_hash = sha2(login_tgt)
                signature = sk.sign(login_hash, k=k)
                sig_dic_key = "r" + str(r_idx) + "s" + str(s_idx)
                try:
                    # because who trusts python
                    vk = sk.get_verifying_key()
                    vk.verify(signature, login_hash)
                    print(sig_dic_key, " - good signature")
                except BadSignatureError:
                    print(sig_dic_key, " - BAD SIGNATURE")

It's a very interesting challenge and I want to finally break ECDSA.
Thanks in advance
submitted by unk1nd0n3 to webappsec [link] [comments]

Marketing! not rly but..

Monero is my favorite thing as of now. I mean like ever in the whole world. Its potential to basically free the world (of government tyranny, censorship, famine, central bankers, etc) is by far the most promising of any crypto out there, and probably more promising than literally any other thing or movement in the entire history of the world (besides OneCoin or Darth Vader). I am by no means a Dash pump and dumper where I just wanna see the price moon immediately (not that I'm opposed to that, obviously), or a Zcash fool where I have no idea what cryptocurrency is about (Zcash is a cult, imo), I'm a Moner, or whatever Monero people are called.
I just want it to be used! Which means, first and foremost, people have to be aware of its existence. What I've been doing, and I don't recommend this, unless you have a mountain of monero and an opinion of it as high as I do, is tell people to download the monero wallet, in exchange for me sending them a monero. I think this is much more effective than telling them "CryptoNight is much more ASIC resistant than SHA256!" or "ed25519 is more secure than secp256k1!" or whatever. It allows them to feel good when monero gets more expensive, maybe sad when it gets cheaper, but that emotional interaction with monero isn't something that's just forgotten, like those weird words they've never heard before are. First, they are given something they understand (an asset that might gain or lose value), and later they are waaay more likely to "fall down the rabbit hole" than some guy who doesn't own monero. (thus increasing user base and therefore security, privacy, and fungibility, and hopefully they'll not be able to contain their excitement, like me and have to tell yet more people) It's getting to be prohibitively expensive to do this, and I find that a lot of people simply will not accept less than an entire monero for that deal, idk why. Maybe cuz the current paradigm of nothing before the decimal signifies "change", i.e. worthless, or not worth downloading an "annoying blocktrain thing" (real quote). This is vexing. I know I'm not gonna go to my grandma and explain everything and have her be like "dam thats sweet im buying monero".
For people my age, (I'm 20) I want to be able to convey what it's all about, concisely and in a way that excites people more than something that's out of the scope of the concepts they understand. I'm sure some of you more enthusiastic Moners (or monerites?) have "converted" some crypto-foreigners. What's the best way to actively do this? Is there like a video I can sic on em or something? Every fluffyponyza presentation I've ever seen would go like 32.4 miles above the average person's head. They won't read getmonero, or do work to figure out what this obscure and seemingly boring thing is. Some people, especially now, won't be convinced that monero is worth their time, but a lot are open to it. How to effectively reach as many as possible is what I'm after. Would it be legal to have a FFS for a video or a small PR type deal about monero? Not that its gonna instantly make it the world reserve currency, but it could help attract devs or something, idk. Do you guys think something like that would be worth it?
I personally feel that it is important for monero to gain positive exposure, probably more than most people on here. If monero is this thing that people only hear about when they read a news report on a drug bust or ransomware attack, we'll have to go through the same phase bitcoin went through where the establishment along with Joe the Plumber wants it made illegal, or subject to more stringent KYC or AML regulations, cuz "only criminals use it". It is possible that some other crypto takes monero market share because of this. When bitcoin was at that stage, there was bitcoin. That was the crypto space. Now there are tempting scamcoins around every corner and I'd hate to see monero fall by the wayside, even temporarily, in favor of some ridiculous thing like Zcash, which imo misses the whole point of cryptocurrency, or some sockpuppetty corporatecoin like eth. Privacy IS for everyone and I think it might be possible to skip this phase entirely, given the right information is widely distributed.
tl;dr How can I make the fundamental concepts behind Monero accessible and attractive to the average computer-using person, or potential devs who might otherwise be swayed to working on another alt or Bitcoin instead? ps Sorry for the long post I'm a piss poor writer and can't be concise
Edit: for formatting so its not a brick of words
submitted by BifocalComb to Monero [link] [comments]

Find Public Key

Given the equation K = k * G, where K is the public key, G is the generator point and k is the private key
G is a fixed constant here (the secp256k1 base point)
Let's take for example the private key 1
k = 0000000000000000000000000000000000000000000000000000000000000001
G = 0479BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
the public key is 0479BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
I'm quite confused how it resulted in that K value. Is there a working implementation of the equation K = k * G, so I can follow how it derives the public key?
Or am I missing something here?
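For what it's worth, a minimal sketch with the python `ecdsa` package (my assumption - the post doesn't name a library) reproduces the derivation:

    # pip install ecdsa -- illustrative sketch, not the only way to do this.
    from ecdsa import SECP256k1

    k = 1                         # the private key
    K = SECP256k1.generator * k   # elliptic-curve point multiplication K = k * G

    # Uncompressed SEC encoding: 0x04 || X (32 bytes) || Y (32 bytes)
    pub = "04" + format(K.x(), "064X") + format(K.y(), "064X")
    print(pub)
    # 0479BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

For k = 1 the multiplication is trivial: K is simply G itself, which is why the public key printed above equals the generator point.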
submitted by quin24 to Bitcoin [link] [comments]

Updating the Scaling Roadmap [Update] | Paul Sztorc | Jul 17 2017

Paul Sztorc on Jul 17 2017:
Hello,
Last week I posted about updating the Core Scalability Roadmap.
I'm not sure what the future of it is, given that it was concept NACK'ed
by Greg Maxwell the author of the original roadmap, who said that he
regretted writing the first one.
Nonetheless, it was ACKed by everyone else that I heard from, except for
Tom Zander (who objected that it should be a specific project document,
not a "Bitcoin" document -- I sortof agree and decided to label it a
"Core" document -- whether or not anything happens with that label is up
to the community).
I therefore decided to:
  1. Put the draft on GitHub [1]
  2. Update it based on all of the week 1 feedback [2]
  3. Add some spaces at the bottom for comments / expressions of interest [2]
However, without interest from the maintainers of bitcoincore.org
(specifically these [3, 4] pages and similar) the document will probably
be unable to gain traction.
Cheers,
Paul
[1] https://github.com/psztorc/btc-core-capacity-2/blob/master/draft.txt
[2]
https://github.com/psztorc/btc-core-capacity-2/commit/2b4f0ecc9015ee398ce0486ca5c3613e3b929c00
[3] https://bitcoincore.org/en/2015/12/21/capacity-increase/
[4] https://bitcoincore.org/en/2015/12/23/capacity-increases-faq/
On 7/10/2017 12:50 PM, Paul Sztorc wrote:
> [quoted July 10 message trimmed; it is reproduced in full earlier in this document]
original: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-July/014802.html
submitted by dev_list_bot to bitcoin_devlist [link] [comments]

MaxCoin Specifications. Important

Quick Technicals
Cryptography Tech Spec
MaxCoin uses the Keccak (SHA-3) hashing algorithm for its Proof-of-Work. Keccak was selected as an alternative to the NSA-designed SHA-256 after a five-year competition held by NIST, and will increasingly be seen as the algorithm used in banking and other secure applications. A single round of Keccak is used, resulting in a 256-bit hash.
We have also implemented a provably-secure signing algorithm, EC-Schnorr. Every existing cryptocurrency uses the ECDSA algorithm, as chosen by Satoshi; whilst ECDSA is in common use and is secure, EC-Schnorr is provably more secure and is currently being recommended over it (https://www.enisa.europa.eu/activities/identity-and-trust/library/deliverables/algorithms-key-sizes-and-parameters-report/at_download/fullReport). Additionally, MaxCoin changes the elliptic curve utilised within the signing algorithms from a Koblitz curve, secp256k1, to a more secure pseudo-random one, secp256r1. The use of the latter curve is recommended almost universally - and the decision by Satoshi to use the former is one that is often queried in the Bitcoin world. One theory is that there are some speed advantages to using the Koblitz curve, but the implementation used in Bitcoin (OpenSSL) does not make use of this optimisation, and thus the result is reduced security.
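As a quick illustration of the curve swap (and only the curve swap - the python `ecdsa` package used below implements plain ECDSA, not EC-Schnorr, so this is not MaxCoin's actual signer):

    # Contrast of the two curves only; NIST256p is secp256r1.
    import hashlib
    from ecdsa import SigningKey, SECP256k1, NIST256p

    for curve in (SECP256k1, NIST256p):  # Koblitz curve vs pseudo-random curve
        sk = SigningKey.generate(curve=curve, hashfunc=hashlib.sha256)
        sig = sk.sign(b"maxcoin", hashfunc=hashlib.sha256)
        assert sk.get_verifying_key().verify(sig, b"maxcoin", hashfunc=hashlib.sha256)
        print(curve.name, "sign/verify ok")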
The cryptography choices within MaxCoin have been made to maximise security and, where possible, to minimise NSA influence. We have been advised throughout by the renowned cryptography expert Professor Nigel Smart (https://en.wikipedia.org/wiki/Nigel_Smart_(cryptographer)).
These changes also lay the foundation for some key features we're aiming to implement in MaxCoin over the coming months, so while they may currently appear uninteresting changes they pave the way for our future growth.
What do you mean by "Starting Algorithm"?
This is an issue of hardware-miner resistance, such as ASICs. Keccak is the starting algorithm for MaxCoin, and at this point in time no hardware miner exists for it. However, creating a Keccak ASIC is not impossible. Therefore, in order to protect against a hardware-miner future, we are going to implement an "ASIC protection" feature into MaxCoin. This will work by allowing the blockchain to decide a new hashing algorithm for MaxCoin every x blocks. More specifically, the last authenticated transaction's hash is used to determine an integer, and depending on this value an algorithm will be selected. This will mean hardware miners will find it difficult to create hardware in enough time to see a profitable return. Purely for example, these could be:
x | Algorithm
0 | Keccak
1 | Blake
2 | Grostlx2
3 | JH
4 | Skein
5 | Blake2
6 | JH(Grostl)
7 | Keccak+Blake
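A sketch of how that hash-based selection could look (my interpretation of the description above; MaxCoin's actual rule may differ):

    # Note: hashlib's sha3_256 is the final NIST SHA-3, whose padding differs
    # from the original Keccak that MaxCoin uses; it stands in here only to
    # produce a 32-byte hash for the example.
    import hashlib

    ALGOS = ["Keccak", "Blake", "Grostlx2", "JH",
             "Skein", "Blake2", "JH(Grostl)", "Keccak+Blake"]

    def next_algorithm(last_tx_hash: bytes) -> str:
        # The last authenticated transaction's hash determines an integer,
        # which indexes into the algorithm table.
        return ALGOS[int.from_bytes(last_tx_hash, "big") % len(ALGOS)]

    print(next_algorithm(hashlib.sha3_256(b"last authenticated tx").digest()))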
Difficulty & Distribution
MaxCoin will have a zero % premine, proven by the timestamps of the first blocks in a block explorer, and we have attempted to combat low-difficulty instamining with a fast retarget rate up until block 200. At block 200 the Kimoto Gravity Well implementation will take over the retargeting.
Mining is done via CPU at release (mining guides are about to be released on this subreddit as well), but a GPU miner will not be far away. We've already seen some versions in the works since we released the CPU miner yesterday, and while we have not yet seen a working one, it is very unlikely to take long. We'll update all official channels with a Keccak GPU miner once it is available. It's also worth noting that any GPU miner created will not work after the first algorithm switch takes place.
submitted by maxcoinproject to maxcoinproject [link] [comments]

Skycoin Meshnet Project: Skywire Updates

The wifi controller library is now on Github. It's working on Ubuntu and possibly Debian.
Non-Mesh Stuff:
We still have a lot of work to do, but have been making very good progress.
submitted by skycoin to darknetplan [link] [comments]

Namechains

Namechain Domain Example
Generate a private key from UTF-8('namechain') = 6E 61 6D 65 63 68 61 69 6E (with a bunch of leading zeros to be accurate).
Generate the pubkey: secp256k1(6E 61 6D 65 63 68 61 69 6E)
Generate an actually secure private key.
Generate a pubkey from that.
Create a 2 of 2 multisig address from those two pubkeys. I should note here that for generic TLD names, ideally we wouldn't require a second key and a multisig address, only the first keypair composed of the 'name' keys. The reason I have the second keypair in this iteration is that I'm not proficient enough with P2SH 'redeem scripts' to know how to write a script that guarantees funds flowing through an address without someone being able to sniff node traffic and try to double-spend those funds as they 'flow through' (or whether that script is even possible, but I'm fairly certain it is?). This would be pretty useful. As for now, it's just extra data (so extra costs), but it does allow a 'name' to sign names under it, similar to a CA scheme, effectively creating two chains from a name: a signed one and an unsigned one. Maybe this is desirable.
Anyways, fund that multisig address. Spend those funds to an 'ownership' address to expose the public keys (used for verifying names/values with keys). This conveniently lets us establish an ownership address though, whose address itself can be used as the corresponding key to our name that allows us to generate signatures to verify ownership (and can be used later for transferring).
So now a name and a key are paired together. If someone signs a message saying they are 'namechain', we can verify this by searching through transactions for the first 'place-in-chain' transaction with a signature whose public key matches secp256k1(UTF-8('namechain')), and then verifying the signature against the ownership address one transaction away (we can use the ownership address to devise a protocol for transferring names too). Even though it's a multisig transaction, each key is exposed individually, so the convention is that the 'name' key always comes first, or something like that.
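A hedged sketch of that 'name' keypair derivation, using the python `ecdsa` package (my assumption - the post names no library):

    # Derive the deterministic 'name' keypair that anyone can recompute.
    from ecdsa import SigningKey, SECP256k1

    name = b"namechain"
    # The UTF-8 bytes, left-padded with zeros to 256 bits, act as the exponent.
    secexp = int.from_bytes(name, "big")
    sk = SigningKey.from_secret_exponent(secexp, curve=SECP256k1)
    vk = sk.get_verifying_key()
    # Anyone can recompute this public key from the name alone and use it to
    # scan the chain for the name's 'place-in-chain' transaction.
    print("04" + vk.to_string().hex())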
Problems:
This method makes names a function of transactions (which miners will like) and effectively puts trust in the blockchain rather than a third party, though it may make names hard to prune (if you care about them). Names are still discoverable in the sense that you can create a list of names and scan the blockchain for them.

As far as how names form domains: consider that namechain is part of the bitcoin domain. Registering an unsigned name under the namechain domain might involve creating a multisig transaction that includes namechain's keys, your new name's keys, and, for now, the pair of secure keys that ensures you don't get robbed while registering. For signed names, the 2-of-2 multisig address used to register the name acts as the signing address, similar to a CA scheme. This means nesting can go as deep as Bitcoin allows keys in a P2SH multisig 'address' (15 or 20, I think, and probably unbounded eventually, since the current limit is only enforced by the client?).

Possible uses for business people: a service that lets users register a name and offers an API for verifying names and signatures for websites, effectively creating a universal login (because anyone can do it themselves too!). Or a CA-like service as mentioned before. Maybe a certain company is developing a compact node you can use at home to verify this kind of thing and ultimately act as a sort of 1/2FA for pretty much anything. You could do hostname resolution from it by creating a second private key equivalent to an IPv4 or IPv6 address to match with a name and signature.

Orphaned blocks are really only a problem if two people register the same name in two different valid blocks at the same time, which is pretty unlikely. There are a ton of places to optimize this, but the point is that it allows me to associate a name or value with a public key.
Miners could choose to ignore blocks containing name-registration structures that carry no additional fees. Solving the name 'lottery' scenario by using Bitcoin Script to gate registration on the previously registered name and/or block, and requiring anyone registering a name to create identical transactions, would turn the lottery into a per-block 'bid war'. That might not be possible; I haven't been looking at the protocol for long. Tear it to shreds, please. I'm looking into building a proof of concept.
submitted by ftlio to Bitcoin [link] [comments]

Rolling UTXO set hashes | Pieter Wuille | May 15 2017

Pieter Wuille on May 15 2017:
Hello all,
I would like to discuss a way of computing a UTXO set hash that is
very efficient to update, but does not support any compact proofs of
existence or non-existence.
Much has been written on the topic of various data structures and
derived hashes for the UTXO/TXO set before (including Alan Reiner's
trust-free lite nodes [1], Peter Todd's TXO MMR commitments [2] [3],
or Bram Cohen's TXO bitfield [4]). They all provide interesting extra
functionality or tradeoffs, but require invasive changes to the P2P
protocol or how wallets work, or force nodes to maintain their
database in a normative fashion. Instead, here I focus on an efficient
hash that supports nothing but comparing two UTXO sets. However, it is
not incompatible with any of those other approaches, so we can gain
some of the advantages of a UTXO hash without adopting something that
may be incompatible with future protocol enhancements.
  1. Incremental hashing
Computing a hash of the UTXO set is easy when it does not need
efficient updates, and when we can assume a fixed serialization with a
normative ordering for the data in it - just serialize the whole thing
and hash it. As different software or releases may use different
database models for the UTXO set, a solution that is order-independent
would seem preferable.
This brings us to the problem of computing a hash of unordered data.
Several approaches that accomplish this through incremental hashing
were suggested in [5], including XHASH, AdHash, and MuHash. XHASH
consists of first hashing all the set elements independently, and
XORing all those hashes together. This is insecure, as Gaussian
elimination can easily find a subset of random hashes that XOR to a
given value. AdHash/MuHash are similar, except addition/multiplication
modulo a large prime are used instead of XOR. Wagner [6] showed that
attacking XHASH or AdHash is an instance of a generalized birthday
problem (called the k-sum problem in his paper, with unrestricted k),
and gives an O(2^(2*sqrt(n)-1)) algorithm to attack it (for n-bit
hashes). As a result, AdHash with 256-bit hashes only has
2*sqrt(256)-1 = 31 bits of security.
Thankfully, [6] also shows that the k-sum problem cannot be
efficiently solved in groups in which the discrete logarithm problem
is hard, as an efficient k-sum solver can be used to compute discrete
logarithms. As a result, MuHash modulo a sufficiently large safe prime
is provably secure under the DL assumption. Common guidelines on
security parameters [7] say that 3072-bit DL has about 128 bits of
security. A final 256-bit hash can be applied to the 3072-bit result
without loss of security to reduce the final size.
An alternative to multiplication modulo a prime is using an elliptic
curve group. Due to the ECDLP assumption, which the security of
Bitcoin signatures already relies on, this also results in security
against k-sum solving. This approach is used in the Elliptic Curve
Multiset Hash (ECMH) in [8]. For this to work, we must "hash onto a
curve point" in a way that results in points without known discrete
logarithm. The paper suggests using (controversial) binary elliptic
curves to make that operation efficient. If we only consider
secp256k1, one approach is just reading potential X coordinates from a
PRNG until one is found that has a corresponding Y coordinate
according to the curve equation. On average, 2 iterations are needed.
A constant time algorithm to hash onto the curve exists as well [9],
but it is only slightly faster and is much more complicated to
implement.
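To make that loop concrete, here is a small Python sketch of hashing onto secp256k1; it is an illustrative stand-in (the post benchmarks SHA512 plus libsecp256k1's point decompression), using a SHA-256 counter stream to generate candidate X coordinates:

    import hashlib

    P = 2**256 - 2**32 - 977          # secp256k1 field prime; P % 4 == 3

    def hash_to_point(element: bytes):
        """Map bytes to a curve point with no known discrete logarithm."""
        counter = 0
        while True:
            # Candidate X coordinate from a hash-based stream.
            x = int.from_bytes(hashlib.sha256(
                element + counter.to_bytes(8, "big")).digest(), "big") % P
            rhs = (pow(x, 3, P) + 7) % P       # y^2 = x^3 + 7
            y = pow(rhs, (P + 1) // 4, P)      # square root, as P % 4 == 3
            if y * y % P == rhs:               # ~1/2 of candidates succeed
                return (x, y)
            counter += 1

    print(hash_to_point(b"txid:vout:amount:script"))

On average two candidates are tried, matching the two-iterations figure above.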
AdHash-like constructions with a sufficiently large intermediate hash
can be made secure against Wagner's algorithm, as suggested in [10].
4160-bit hashes would be needed for 128 bits of security. When
repetition is allowed, [8] gives a stronger attack against AdHash,
suggesting that as much as 400000 bits are needed. While repetition is
not directly an issue for our use case, it would be nice if
verification software would not be required to check for duplicated
entries.
  2. Efficient addition and deletion
Interestingly, both ECMH and MuHash not only support adding set
elements in any order but also deleting in any order. As a result, we
can simply maintain a running sum for the UTXO set as a whole, and
add/subtract when creating/spending an output in it. In the case of
MuHash it is slightly more complicated, as computing an inverse is
relatively expensive. This can be solved by representing the running
value as a fraction, and multiplying created elements into the
numerator and spent elements into the denominator. Only when the final
hash is desired, a single modular inverse and multiplication is needed
to combine the two.
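For concreteness, here is a minimal Python sketch of such a running fraction, using the safe prime named in the benchmarks below; the hash-to-integer function is a placeholder of my own, not the SHA512+ChaCha20 construction that was benchmarked:

    import hashlib

    PRIME = 2**3072 - 1103717     # the largest 3072-bit safe prime

    def element_hash(data: bytes) -> int:
        """Placeholder: expand data to 3072 bits and reduce mod PRIME."""
        h = b"".join(hashlib.sha512(bytes([i]) + data).digest()
                     for i in range(6))        # 6 * 64 bytes = 3072 bits
        return int.from_bytes(h, "big") % PRIME or 1   # avoid zero

    class MuHash:
        """Running multiplicative hash kept as a numerator/denominator."""
        def __init__(self):
            self.numerator = 1      # product of created outputs
            self.denominator = 1    # product of spent outputs

        def add(self, data: bytes):       # output created
            self.numerator = self.numerator * element_hash(data) % PRIME

        def remove(self, data: bytes):    # output spent
            self.denominator = self.denominator * element_hash(data) % PRIME

        def digest(self) -> bytes:
            # One modular inverse + multiplication, deferred until the
            # final hash is requested; then compress to 256 bits.
            value = (self.numerator *
                     pow(self.denominator, -1, PRIME)) % PRIME
            return hashlib.sha256(value.to_bytes(384, "big")).digest()

(pow(x, -1, m) requires Python 3.8+.)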
As the update operations are also associative, H(a)+H(b)+H(c)+H(d) can
in fact be computed as (H(a)+H(b)) + (H(c)+H(d)). This implies that
all of this is perfectly parallelizable: each thread can process an
arbitrary subset of the update operations, allowing them to be
efficiently combined later.
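A quick sanity check of the order-independence and cancellation properties, using the MuHash sketch above:

    h1, h2 = MuHash(), MuHash()
    for item in (b"a", b"b", b"c"):
        h1.add(item)
    for item in (b"c", b"a", b"b"):   # same set, different order
        h2.add(item)
    h2.add(b"spent")                  # creating and then spending an
    h2.remove(b"spent")               # output cancels out exactly
    assert h1.digest() == h2.digest()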
  3. Comparison of approaches
Numbers below are based on preliminary benchmarks on a single thread
of a i7-6820HQ CPU running at 3.4GHz.
(1) (MuHash) Multiplying 3072-bit hashes mod 2^3072 - 1103717 (the
largest 3072-bit safe prime).
* Needs a fast modular multiplication/inverse implementation.
* Using SHA512 + ChaCha20 for generating the hashes takes 1.2us per element.
* Modular multiplication using GMP takes 1.5us per element (2.5us
with a 60-line C+asm implementation).
* 768 bytes for maintaining a running sum (384 for numerator, 384
for denominator).
* Very common security assumption. Even if the DL assumption would 
be broken (but no k-sum algorithm faster than Wagner's is found), this
still maintains 110 bits of security.
(2) (ECMH) Adding secp256k1 EC points
* Much more complicated than the previous approaches when 
implementing from scratch, but almost no extra complexity when ECDSA
secp256k1 signature validation is already implemented.
* Using SHA512 + libsecp256k1's point decompression for generating 
the points takes 11us per element on average.
* Addition/subtracting of N points takes 5.25us + 0.25us*N.
* 64 bytes for a running sum.
* Identical security assumption as Bitcoin's signatures.
Using the numbers above, we find that:
* Processing all creations and spends in an average block takes (1)
24ms (2) 100ms.
* Processing precomputed per-transaction aggregates in an average
block takes (1) 3ms (2) 0.5ms.
Note that while (2) has higher CPU usage than (1) in general, it has
lower latency when using precomputed per-transaction aggregates. Using
such aggregates is also more feasible as they're only 64 bytes rather
than 768. Because of simplicity, (1) has my preference.
Overall, these numbers are sufficiently low (note that they can be
parallelized) that it would be reasonable for full nodes and/or other
software to always maintain one of them, and effectively have a
rolling cryptographic checksum of the UTXO set at all times.
  4. Use cases
* Replacement for Bitcoin Core's gettxoutsetinfo RPC's hash
computation. This currently requires minutes of I/O and CPU, as it
serializes and hashes the entire UTXO set. A rolling set hash would
make this instant, making the whole RPC much more usable for sanity
checking.
* Assisting in implementation of fast sync methods with known good
blocks/UTXO sets.
* Database consistency checking: by remembering the UTXO set hash of
the past few blocks (computed on the fly), a consistency check can be
done that recomputes it based on the database.
[1] https://bitcointalk.org/index.php?topic=88208.0
[2] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-May/012715.html
[3] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-February/013591.html
[4] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-March/013928.html
[5] https://cseweb.ucsd.edu/~mihir/papers/inchash.pdf
[6] https://people.eecs.berkeley.edu/~daw/papers/genbday.html
[7] https://www.keylength.com/
[8] https://arxiv.org/pdf/1601.06502.pdf
[9] https://www.di.ens.fr/~fouque/pub/latincrypt12.pdf
[10] http://csrc.nist.gov/groups/ST/hash/sha-3/Aug2014/documents/gligoroski_paper_sha3_2014_workshop.pdf
Cheers,

Pieter
original: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-May/014337.html
submitted by dev_list_bot to bitcoin_devlist [link] [comments]
