FPGA bitFlyer USA

How are FPGAs used in trading?

A field-programmable gate array (FPGA) is a chip that can be programmed to suit whatever purpose you want, as often as you want it and wherever you need it. FPGAs provide multiple advantages, including low latency, high throughput and energy efficiency.
To fully understand what FPGAs offer, imagine a performance spectrum. At one end, you have the central processing unit (CPU), which offers a generic set of instructions that can be combined to carry out an array of different tasks. This makes a CPU extremely flexible, and its behaviour can be defined through software. However, CPUs are also slow because they have to select from the available generic instructions to complete each task. In a sense, they’re a “jack of all trades, but a master of none”.
At the other end of the spectrum sit application-specific integrated circuits (ASICs). These are potentially much faster because they have been built with a single task in mind, making them a “master of one trade”. This is the kind of chip people use to mine bitcoin, for example. The downside of ASICs is that they can’t be changed, and they cost time and money to develop. FPGAs offer a perfect middle ground: they can be significantly faster than a CPU and are more flexible than ASICs.
FPGAs contain thousands, sometimes even millions, of configurable logic blocks (CLBs). These blocks can be configured and combined to carry out any task that a CPU could solve. Unlike a CPU, an FPGA isn’t burdened by surplus hardware that would otherwise slow it down. It can therefore carry out specific tasks quickly and efficiently, and can even process several tasks in parallel. These characteristics make FPGAs popular across a wide range of sectors, from aerospace to medical engineering and security systems, and of course finance.
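As an illustration of how a configurable block works, the Python sketch below (a toy model, not vendor tooling) treats a 2-input look-up table as a programmable truth table — the same trick a real CLB's LUTs use to implement arbitrary logic:

```python
# Toy model of an FPGA look-up table (LUT): it stores one output bit
# for every input combination, so the same hardware can be
# "programmed" to implement any Boolean function of its inputs.

def make_lut(truth_table):
    """Return a function behaving like a 2-input LUT configured
    with the given 4-entry truth table."""
    def lut(a, b):
        return truth_table[(a << 1) | b]
    return lut

# Configure the same block first as an AND gate, then as an XOR gate.
and_gate = make_lut([0, 0, 0, 1])
xor_gate = make_lut([0, 1, 1, 0])

print(and_gate(1, 1))  # 1
print(xor_gate(1, 0))  # 1
```

Reconfiguring an FPGA amounts to rewriting these truth tables and the routing between blocks, which is why the same chip can serve entirely different applications.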
How are FPGAs used in the financial services sector?
Speed and versatility are particularly important when buying or selling stocks and other securities. In the era of electronic trading, decisions are made in the blink of an eye. As prices change and orders come and go, companies receive new information from exchanges and other sources via high-speed networks, with reaction times measured in nanoseconds. The sheer volume and velocity of this data demand very high bandwidth to process it all. Specialized trading algorithms act on the new information to make trades. FPGAs provide the perfect platform for developing these applications, as they allow you to bypass non-essential software as well as general-purpose hardware.
How do market makers use FPGAs to provide liquidity?
As a market maker, IMC provides liquidity to buyers and sellers of financial instruments. This requires us to price every instrument we trade and to react to the market accordingly. Valuation is a view on what the price of an asset should be, which is handled by our traders and our automated pricing algorithms. When a counterpart wants to buy or sell an asset on a trading venue, our role is to always be there and offer, or bid, a fair price for the asset. FPGAs enable us to perform this key function in the most efficient way possible.
At IMC, we keep a close eye on emerging technologies that can potentially improve our business. We began working with FPGAs more than a decade ago and are constantly exploring ways to develop this evolving technology. We work in a competitive industry, so our engineers have to be on their toes to make sure we’re continuously improving.
What does an FPGA engineer do?
Being an FPGA engineer is all about learning and identifying new solutions to challenges as they arise. A software developer can write code in a high-level language, know within seconds whether it works, and deploy it quickly. However, that code has to pass through several abstraction layers and generic hardware components, so quick deployment comes at the cost of the fastest possible outcome.
As an FPGA engineer, it may take two to three hours of compilation time before you know whether your adjustment will result in the outcome you want. However, you can increase performance at the cost of more engineering time. The day-to-day challenge you face is how to make the process as efficient as possible with the given trade-offs while pushing the boundaries of the FPGA technology.
Skills needed to be an FPGA engineer
Things change extremely rapidly in the trading world, and agility is the name of the game. Unsurprisingly, FPGA engineers tend to enjoy a challenge. To work as an FPGA engineer at a company like IMC, you have to be a great problem-solver, a quick learner and highly adaptable.
What makes IMC a great fit for an FPGA engineer?
IMC offers a great team dynamic. We are smaller than many technology or finance houses, and we operate very much like a family unit. This means that, as a graduate engineer, you’ll never be far from the action, and you’ll be able to make an impact from day one.
Another key difference is that you’ll get to see the final outcome of your work. If you come up with an idea, we’ll give you the chance to make it work. If it does, you’ll see the results put into practice in a matter of days, which is always a great feeling. If it doesn’t, you’ll get to find out why – so there’s an opportunity to learn and improve for next time.
Ultimately, working at IMC is about having skin in the game. You’ll be entrusted with making your own decisions. And you’ll be working side by side with super smart people who are open-minded and always interested in hearing your ideas. Market making is a technology-dependent process, and we’re all in this together.
Think you have what it takes to make a difference as a technology graduate at IMC? Check out our graduate opportunities page.
submitted by IMC_Trading to u/IMC_Trading [link] [comments]

thank you Santa

thank you Santa submitted by chishiki to gpumining [link] [comments]

Transcript of discussion between an ASIC designer and several proof-of-work designers from #monero-pow channel on Freenode this morning

[08:07:01] lukminer contains precompiled cn/r math sequences for some blocks: https://lukminer.org/2019/03/09/oh-kay-v4r-here-we-come/
[08:07:11] try that with RandomX :P
[08:09:00] tevador: are you ready for some RandomX feedback? it looks like the CNv4 is slowly stabilizing, hashrate comes down...
[08:09:07] how does it even make sense to precompile it?
[08:09:14] mine 1% faster for 2 minutes?
[08:09:35] naturally we think the entire asic-resistance strategy is doomed to fail :) but that's a high-level thing, who knows. people may think it's great.
[08:09:49] about RandomX: looks like the cache size was chosen to make it GPU-hard
[08:09:56] looking forward to more docs
[08:11:38] after initial skimming, I would think it's possible to make a 10x asic for RandomX. But at least for us, we will only make an ASIC if there is not a total ASIC hostility there in the first place. That's better for the secret miners then.
[08:13:12] What I propose is this: we are working on an Ethash ASIC right now, and once we have that working, we would invite tevador or whoever wants to come to HK/Shenzhen and we walk you guys through how we would make a RandomX ASIC. You can then process this input in any way you like. Something like that.
[08:13:49] unless asics (or other accelerators) re-emerge on XMR faster than expected, it looks like there is a little bit of time before RandomX rollout
[08:14:22] 10x in what measure? $/hash or watt/hash?
[08:14:46] watt/hash
[08:15:19] so you can make a 10 times more efficient double precision FPU?
[08:16:02] like I said let's try to be productive. You are having me here, let's work together!
[08:16:15] continue with RandomX, publish more docs. that's always helpful.
[08:16:37] I'm trying to understand how it's possible at all. Why AMD/Intel are so inefficient at running FP calculations?
[08:18:05] midipoet ([email protected]/web/irccloud.com/x-vszshqqxwybvtsjm) has joined #monero-pow
[08:18:17] hardware development works the other way round. We start with 1) math then 2) optimization priority 3) hw/sw boundary 4) IP selection 5) physical implementation
[08:22:32] This still doesn't explain at which point you get 10x
[08:23:07] Weren't you the ones claiming "We can accelerate ProgPoW by a factor of 3x to 8x." ? I find it hard to believe too.
[08:30:20] sure
[08:30:26] so my idea: first we finish our current chip
[08:30:35] from simulation to silicon :)
[08:30:40] we love this stuff... we do it anyway
[08:30:59] now we have a communication channel, and we don't call each other names immediately anymore: big progress!
[08:31:06] you know, we russians have a saying "it was smooth on paper, but they forgot about ravines"
[08:31:12] So I need a bit more details
[08:31:16] ha ha. good!
[08:31:31] that's why I want to avoid to just make claims
[08:31:34] let's work
[08:31:40] RandomX comes in Sep/Oct, right?
[08:31:45] Maybe
[08:32:20] We need to audit it first
[08:32:31] ok
[08:32:59] we don't make chips to prove sw devs that their assumptions about hardware are wrong. especially not if these guys then promptly hardfork and move to the next wrong assumption :)
[08:33:10] from the outside, this only means that hw & sw are devaluing each other
[08:33:24] neither of us should do this
[08:33:47] we are making chips that can hopefully accelerate more crypto ops in the future
[08:33:52] signing, verifying, proving, etc.
[08:34:02] PoW is just a feature like others
[08:34:18] sech1: is it easy for you to come to Hong Kong? (visa-wise)
[08:34:20] or difficult?
[08:34:33] or are you there sometimes?
[08:34:41] It's kind of far away
[08:35:13] we are looking forward to more RandomX docs. that's the first step.
[08:35:31] I want to avoid that we have some meme "Linzhi says they can accelerate XYZ by factor x" .... "ha ha ha"
[08:35:37] right? we don't want that :)
[08:35:39] doc is almost finished
[08:35:40] What docs do you need? It's described pretty good
[08:35:41] so I better say nothing now
[08:35:50] we focus on our Ethash chip
[08:36:05] then based on that, we are happy to walk interested people through the design and what else it can do
[08:36:22] that's a better approach from my view than making claims that are laughed away (rightfully so, because no silicon...)
[08:36:37] ethash ASIC is basically a glorified memory controller
[08:36:39] sech1: tevador said something more is coming (he just did it again)
[08:37:03] yes, some parts of RandomX are not described well
[08:37:10] like dataset access logic
[08:37:37] RandomX looks like progpow for CPU
[08:37:54] yes
[08:38:03] it is designed to reflect CPU
[08:38:34] so any ASIC for it = CPU in essence
[08:39:04] of course there are still some things in regular CPU that can be thrown away for RandomX
[08:40:20] uncore parts are not used, but those will use very little power
[08:40:37] except for memory controller
[08:41:09] I'm just surprised sometimes, ok? let me ask: have you designed or taped out an asic before? isn't it risky to make assumptions about things that are largely unknown?
[08:41:23] I would worry
[08:41:31] that I get something wrong...
[08:41:44] but I also worry like crazy that CNv4 will blow up, where you guys seem to be relaxed
[08:42:06] I didn't want to bring up anything RandomX because CNv4 is such a nailbiter... :)
[08:42:15] how do you guys know you don't have asics in a week or two?
[08:42:38] we don't have experience with ASIC design, but RandomX is simply designed to exactly fit CPU capabilities, which is the best you can do anyways
[08:43:09] similar as ProgPoW did with GPUs
[08:43:14] some people say they want to do asic-resistance only until the vast majority of coins has been issued
[08:43:21] that's at least reasonable
[08:43:43] yeah but progpow totally will not work as advertised :)
[08:44:08] yeah, I've seen that comment about progpow a few times already
[08:44:11] which is no surprise if you know it's just a random sales story to sell a few more GPUs
[08:44:13] RandomX is not permanent, we are expecting to switch to ASIC friendly in a few years if possible
[08:44:18] yes
[08:44:21] that makes sense
[08:44:40] linzhi-sonia: how so? will it break or will it be asic-able with decent performance gains?
[08:44:41] are you happy with CNv4 so far?
[08:45:10] ah, long story. progpow is a masterpiece of deception, let's not get into it here.
[08:45:21] if you know chip marketing it makes more sense
[08:45:24] linzhi-sonia: So far? lol! a bit early to tell, don't you think?
[08:45:35] the diff is coming down
[08:45:41] first few hours looked scary
[08:45:43] I remain skeptical: I only see ASICs being reasonable if they are already as ubiquitous as smartphones
[08:45:46] yes, so far so good
[08:46:01] we knew the diff would not come down until after block 75
[08:46:10] yes
[08:46:22] but first few hours it looks like only 5% hashrate left
[08:46:27] looked
[08:46:29] now it's better
[08:46:51] the next worry is: when will "unexplainable" hashrate come back?
[08:47:00] you hope 2-3 months? more?
[08:47:05] so give it another couple of days. will probably overshoot to the downside, and then rise a bit as miners get updated and return
[08:47:22] 3 months minimum turnaround, yes
[08:47:28] nah
[08:47:36] don't underestimate asicmakers :)
[08:47:54] you guys don't get #1 priority on chip fabs
[08:47:56] 3 months = 90 days. do you know what is happening in those 90 days exactly? I'm pretty sure you don't. same thing as before.
[08:48:13] we don't do any secret chips btw
[08:48:21] 3 months assumes they had a complete design ready to go, and added the last minute change in 1 day
[08:48:24] do you know who is behind the hashrate that is now bricked?
[08:48:27] innosilicon?
[08:48:34] hyc: no no, and no. :)
[08:48:44] hyc: have you designed or taped out a chip before?
[08:48:51] yes, many years ago
[08:49:10] then you should know that 90 days is not a fixed number
[08:49:35] sure, but like I said, other makers have greater demand
[08:49:35] especially not if you can prepare, if you just have to modify something, or you have more programmability in the chip than some people assume
[08:50:07] we are chipmakers, we would never dare to do what you guys are doing with CNv4 :) but maybe that just means you are cooler!
[08:50:07] and yes, programmability makes some aspect of turnaround easier
[08:50:10] all fine
[08:50:10] I hope it works!
[08:50:28] do you know who is behind the hashrate that is now bricked?
[08:50:29] inno?
[08:50:41] we suspect so, but have no evidence
[08:50:44] maybe we can try to find them, but we cannot spend too much time on this
[08:50:53] it's probably not so much of a secret
[08:51:01] why should it be, right?
[08:51:10] devs want this cat-and-mouse game? devs get it...
[08:51:35] there was one leak saying it's innosilicon
[08:51:36] so you think 3 months, ok
[08:51:43] inno is cool
[08:51:46] good team
[08:51:49] IP design house
[08:51:54] in Wuhan
[08:52:06] they send their people to conferences with fake biz cards :)
[08:52:19] pretending to be other companies?
[08:52:26] sure
[08:52:28] ha ha
[08:52:39] so when we see them, we look at whatever card they carry and laugh :)
[08:52:52] they are perfectly suited for secret mining games
[08:52:59] they made at most $6 million in 2 months of mining, so I wonder if it was worth it
[08:53:10] yeah. no way to know
[08:53:15] but it's good that you calculate!
[08:53:24] this is all about cost/benefit
[08:53:25] then you also understand - imagine the value of XMR goes up 5x, 10x
[08:53:34] that whole "asic resistance" thing will come down like a house of cards
[08:53:41] I would imagine they sell immediately
[08:53:53] the investor may fully understand the risk
[08:53:57] the buyer
[08:54:13] it's not healthy, but that's another discussion
[08:54:23] so mid-June
[08:54:27] let's see
[08:54:49] I would be susprised if CNv4 ASICs show up at all
[08:54:56] surprised*
[08:54:56] why?
[08:55:05] is only an economic question
[08:55:12] yeah should be interesting. FPGAs will be near their limits as well
[08:55:16] unless XMR goes up a lot
[08:55:19] no, not *only*. it's also a technology question
[08:55:44] you believe CNv4 is "asic resistant"? which feature?
[08:55:53] it's not
[08:55:59] cnv4 = RandomX ?
[08:56:03] no
[08:56:07] cnv4=cryptonight/r
[08:56:11] ah
[08:56:18] CNv4 is the one we have now, I think
[08:56:21] since yesterday
[08:56:30] it's plenty enough resistant for current XMR price
[08:56:45] that may be, yes!
[08:56:55] I look at daily payouts. XMR = ca. 100k USD / day
[08:57:03] it can hold until October, but it's not asic resistant
[08:57:23] well, last 24h only 22,442 USD :)
[08:57:32] I think 80 h/s per watt ASICs are possible for CNv4
[08:57:38] linzhi-sonia where do you produce your chips? TSMC?
[08:57:44] I'm cruious how you would expect to build a randomX ASIC that outperforms ARM cores for efficiency, or Intel cores for raw speed
[08:57:48] curious
[08:58:01] yes, tsmc
[08:58:21] Our team did the world's first bitcoin asic, Avalon
[08:58:25] and upcoming 2nd gen Ryzens (64-core EPYC) will be a blast at RandomX
[08:58:28] designed and manufactured
[08:58:53] still being marketed?
[08:59:03] linzhi-sonia: do you understand what xmr wants to achieve, community-wise?
[08:59:14] Avalon? as part of Canaan Creative, yes I think so.
[08:59:25] there's not much interesting going on in SHA256
[08:59:29] Inge-: I would think so, but please speak
[08:59:32] hyc: yes
[09:00:28] linzhi-sonia: i am curious to hear your thoughts. I am fairly new to this space myself...
[09:00:51] oh
[09:00:56] we are grandpas, and grandmas
[09:01:36] yet I have no problem understanding why ASICS are currently reviled.
[09:01:48] xmr's main differentiators to, let's say btc, are anonymity and fungibility
[09:01:58] I find the client terribly slow btw
[09:02:21] and I think the asic-forking since last may is wrong, doesn't create value and doesn't help with the project objectives
[09:02:25] which "the client" ?
[09:02:52] Monero GUI client maybe
[09:03:12] MacOS, yes
[09:03:28] What exactly is slow?
[09:03:30] linzhi-sonia: I run my own node, and use the CLI and Monerujo. Have not had issues.
[09:03:49] staying in sync
[09:03:49] linzhi-sonia: decentralization is also a key principle
[09:03:56] one that Bitcoin has failed to maintain
[09:04:39] hmm
[09:05:00] looks fairly decentralized to me. decentralization is the result of 3 goals imo: resilient, trustless, permissionless
[09:05:28] don't ask a hardware maker about physical decentralization. that's too ideological. we focus on logical decentralization.
[09:06:11] physical decentralization is important. with bulk of bitcoin mining centered on Chinese hydroelectric dams
[09:06:19] have you thought about including block data in the PoW?
[09:06:41] yes, of course.
[09:07:39] is that already in an algo?
[09:08:10] hyc: about "centered on chinese hydro" - what is your source? the best paper I know is this: https://coinshares.co.uk/wp-content/uploads/2018/11/Mining-Whitepaper-Final.pdf
[09:09:01] linzhi-sonia: do you mine on your ASICs before you sell them?
[09:09:13] besides testing of course
[09:09:45] that paper puts Chinese btc miners at 60% max
[09:10:05] tevador: I think everybody learned that that is not healthy long-term!
[09:10:16] because it gives the chipmaker a cost advantage over its own customers
[09:10:33] and cost advantage leads to centralization (physical and logical)
[09:10:51] you guys should know who finances progpow and why :)
[09:11:05] but let's not get into this, ha ha. want to keep the channel civilized. right OhGodAGirl ? :)
[09:11:34] tevador: so the answer is no! 100% and definitely no
[09:11:54] that "self-mining" disease was one of the problems we have now with asics, and their bad reputation (rightfully so)
[09:13:08] I plan to write a nice short 2-page paper or so on our chip design process. maybe it's interesting to some people here.
[09:13:15] basically the 5 steps I mentioned before, from math to physical
[09:13:32] linzhi-sonia: the paper you linked puts 48% of bitcoin mining in Sichuan. the total in China is much more than 60%
[09:13:38] need to run it by a few people to fix bugs, will post it here when published
[09:14:06] hyc: ok! I am just sharing the "best" document I know today. it definitely may be wrong and there may be a better one now.
[09:14:18] hyc: if you see some reports, please share
[09:14:51] hey I am really curious about this: where is a PoW algo that puts block data into the PoW?
[09:15:02] the previous paper I read is from here http://hackingdistributed.com/2018/01/15/decentralization-bitcoin-ethereum/
[09:15:38] hyc: you said that already exists? (block data in PoW)
[09:15:45] it would make verification harder
[09:15:49] linzhi-sonia: https://the-eye.eu/public/Books/campdivision.com/PDF/Computers%20General/Privacy/bitcoin/meh/hashimoto.pdf
[09:15:51] but for chips it would be interesting
[09:15:52] we discussed the possibility about a year ago https://www.reddit.com/Monero/comments/8bshrx/what_we_need_to_know_about_proof_of_work_pow/
[09:16:05] oh good links! thanks! need to read...
[09:16:06] I think that paper by dryja was original
[09:17:53] since we have a nice flow - second question I'm very curious about: has anyone thought about in-protocol rewards for other functions?
[09:18:55] we've discussed micropayments for wallets to use remote nodes
[09:18:55] you know there is a lot of work in other coins about STARK provers, zero-knowledge, etc. many of those things very compute intense, or need to be outsourced to a service (zether). For chipmakers, in-protocol rewards create an economic incentive to accelerate those things.
[09:19:50] whenever there is an in-protocol reward, you may get the power of ASICs doing something you actually want to happen
[09:19:52] it would be nice if there was some economic reward for running a fullnode, but no one has come up with much more than that afaik
[09:19:54] instead of fighting them off
[09:20:29] you need to use asics, not fight them. that's an obvious thing to say for an asicmaker...
[09:20:41] in-protocol rewards can be very powerful
[09:20:50] like I said before - unless the ASICs are so useful they're embedded in every smartphone, I dont see them being a positive for decentralization
[09:21:17] if they're a separate product, the average consumer is not going to buy them
[09:21:20] now I was talking about speedup of verifying, signing, proving, etc.
[09:21:23] they won't even know what they are
[09:22:07] if anybody wants to talk about or design in-protocol rewards, please come talk to us
[09:22:08] the average consumer also doesn't use general purpose hardware to secure blockchains either
[09:22:14] not just for PoW, in fact *NOT* for PoW
[09:22:32] it requires sw/hw co-design
[09:23:10] we are in long-term discussions/collaboration over this with Ethereum, Bitcoin Cash. just talk right now.
[09:23:16] this was recently published, suggesting more uptake I guess https://btcmanager.com/college-students-are-the-second-biggest-miners-of-cryptocurrency/
[09:23:29] I find it pretty hard to believe their numbers
[09:24:03] well
[09:24:09] sorry, original article: https://www.pcmag.com/news/366952/college-kids-are-using-campus-electricity-to-mine-crypto
[09:24:11] just talk, no? rumors
[09:24:18] college students are already more educated than the average consumer
[09:24:29] we are not seeing many such customers anymore
[09:24:30] it's data from cisco monitoring network traffic
[09:24:33] and they're always looking for free money
[09:24:48] of course anyone with "free" electricity is inclined to do it
[09:24:57] but look at the rates, cannot make much money
[09:26:06] Ethereum is a bloated collection of bugs wrapped in a UI. I suppose they need all the help they can get
[09:26:29] Bitcoin Cash ... just another get rich quick scheme
[09:26:38] hmm :)
[09:26:51] I'll give it back to you, ok? ha ha. arrogance comes before the fall...
[09:27:17] maye we should have a little fun with CNv4 mining :)
[09:27:25] ;)
[09:27:38] come on. anyone who has watched their track record... $75M lost in ETH at DAO hack
[09:27:50] every smart contract that comes along is just waiting for another hack
[09:27:58] I just wanted to throw out the "in-protocol reward" thing, maybe someone sees the idea and wants to cowork. maybe not. maybe it's a stupid idea.
[09:29:18] linzhi-sonia: any thoughts on CN-GPU?
[09:29:55] CN-GPU has one positive aspect - it wastes chip area to implement all 18 hash algorithms
[09:30:19] you will always hear roughly the same feedback from me:
[09:30:52] "This algorithm very different, it heavy use floating point operations to hurt FPGAs and general purpose CPUs"
[09:30:56] the problem is, if it's profitable for people to buy ASIC miners and mine, it's always more profitable for the manufacturer to not sell and mine themselves
[09:31:02] "hurt"
[09:31:07] what is the point of this?
[09:31:15] it totally doesn't work
[09:31:24] you are hurting noone, just demonstrating lack of ability to think
[09:31:41] what is better: algo designed for chip, or chip designed for algo?
[09:31:43] fireice does it on daily basis, CN-GPU is a joke
[09:31:53] tevador: that's not really true, especially in a market with such large price fluctuations as cryptocurrency
[09:32:12] it's far less risky to sell miners than mine with them and pray that price doesn't crash for next six months
[09:32:14] I think it's great that crypto has a nice group of asicmakers now, hw & sw will cowork well
[09:32:36] jwinterm yes, that's why they premine them and sell after
[09:32:41] PoW is about being thermodynamically and cryptographically provable
[09:32:45] premining with them is taking on that risk
[09:32:49] not "fork when we think there are asics"
[09:32:51] business is about risk minimization
[09:32:54] that's just fear-driven
[09:33:05] Inge-: that's roughly the feedback
[09:33:24] I'm not saying it hasn't happened, but I think it's not so simple as saying "it always happens"
[09:34:00] jwinterm: it has certainly happened on BTC. and also on XMR.
[09:34:19] ironically, please think about it: these kinds of algos indeed prove the limits of the chips they were designed for. but they don't prove that you cannot implement the same algo differently! cannot!
[09:34:26] Risk minimization is not starting a business at all.
[09:34:34] proof-of-gpu-limit. proof-of-cpu-limit.
[09:34:37] imagine you have a money printing machine, would you sell it?
[09:34:39] proves nothing for an ASIC :)
[09:35:05] linzhi-sonia: thanks. I dont think anyone believes you can't make a more efficient cn-gpu asic than a gpu - but that it would not be orders of magnitude faster...
[09:35:24] ok
[09:35:44] like I say. these algos are, that's really ironic, designed to prove the limitatios of a particular chip in mind of the designer
[09:35:50] exactly the wrong way round :)
[09:36:16] like the cache size in RandomX :)
[09:36:18] beautiful
[09:36:29] someone looked at GPU designs
[09:37:31] linzhi-sonia can you elaborate? Cache size in RandomX was selected to fit CPU cache
[09:37:52] yes
[09:38:03] too large for GPU
[09:38:11] as I said, we are designing the algorithm to exactly fit CPU capabilities, I do not claim an ASIC cannot be more efficient
[09:38:16] ok!
[09:38:29] when will you do the audit?
[09:38:35] will the results be published in a document or so?
[09:38:37] I claim that single-chip ASIC is not viable, though
[09:39:06] you guys are brave, noone disputes that. 3 anti-asic hardforks now!
[09:39:18] 4th one coming
[09:39:31] 3 forks were done not only for this
[09:39:38] they had scheduled updates in the first place
[09:48:10] Monero is the #1 anti-asic fighter
[09:48:25] Monero is #1 for a lot of reasons ;)
[09:48:40] It's the coin with the most hycs.
[09:48:55] mooooo
[09:59:06] sneaky integer overflow, bug squished
[10:38:00] p0nziph0ne ([email protected]/vpn/privateinternetaccess/p0nziph0ne) has joined #monero-pow
[11:10:53] The convo here is wild
[11:12:29] it's like geo-politics at the intersection of software and hardware manufacturing for thermoeconomic value.
[11:13:05] ..and on a Sunday.
[11:15:43] midipoet: hw and sw should work together and stop silly games to devalue each other. to outsiders this is totally not attractive.
[11:16:07] I appreciate the positive energy here to try to listen, learn, understand.
[11:16:10] that's a start
[11:16:48] <-- p0nziph0ne ([email protected]/vpn/privateinternetaccess/p0nziph0ne) has quit (Quit: Leaving)
[11:16:54] we won't do silly mining against xmr "community" wishes, but not because we couldn't do it, but because it's the wrong direction in the long run, for both sides
[11:18:57] linzhi-sonia: I agree to some extent. Though, in reality, there will always be divergence between social worlds. Not every body has the same vision of the future. Reaching societal consensus on reality tomorrow is not always easy
[11:20:25] absolutely. especially at a time when there is so much profit to be made from divisiveness.
[11:20:37] someone will want to make that profit, for sure
[11:24:32] Yes. Money distorts.
[11:24:47] Or wealth...one of the two
[11:26:35] Too much physical money will distort rays of light passing close to it indeed.
submitted by jwinterm to Monero [link] [comments]

BitcoinSOV (BSOV) Trading is Coming to Resfinex on 08 Feb 2020.

Dear Users,
We are pleased to announce that BitcoinSOV (BSOV) will be listed on 08th Feb 2020.
What is BitcoinSOV (BSOV)?
BitcoinSOV is a 100% community-driven cryptocurrency that does not rely on centralized decision-makers or traditional power structures to survive. This deflationary grassroots movement is built from the bottom up and relies fully on people like you to build it. We use non-violent methods of action: we fight for financial independence and freedom from inflation.
What time will funding and trading start?
Trading Pairs
Confirmations required before deposits credit
Fees
BSOV stats
Trade with caution
Thanks for your support,
Resfinex Team
Invest with caution
Listing an asset or token for trade is not a recommendation to buy, sell, or participate in the associated network. Do your own research and invest at your own risk.
submitted by resfinex_official to u/resfinex_official [link] [comments]

Mining for Profitability - Horizen (formerly ZenCash) Thanks Early GPU Miners

Mining for Profitability - Horizen (formerly ZenCash) Thanks Early GPU Miners
Thank you for inviting Horizen to the GPU mining AMA!
ZEN had a great run of GPU mining that lasted well over a year and brought lots of value to the early Zclassic miners. It is mined using the Equihash algorithm, and ASIC miners have been available for it since about June 2018. GPU mining is not really profitable for Horizen at this point in time.
We’ve got a lot of miners in the Horizen community, and many GPU miners also buy ASIC miners. Happy to talk about algorithm changes, security, and any other aspect of mining in the questions below. There are also links to the Horizen website, blog post, etc. below.
So, if I’m not here to ask you to mine, hold, and love ZEN, what can I offer? Notes on some of the lessons I’ve learned about maximizing mining profitability, an update on Horizen (there is life after moving on from GPU mining), and answers to your questions over the next 7 days.
_____________________________________________________________________________________________________

Mining for Profitability - Horizen (formerly ZenCash) Thanks Early GPU Miners

Author: Rolf Versluis - co-founder of Horizen

In GPU mining, just like in many of the activities involved with Bitcoin and cryptocurrencies, there is both a cycle and a progression. The Bitcoin price cycle is fairly steady, and by creating a personal handbook of actions to take during the cycle, GPU miners can maximize their profitability.
Maximizing profitability isn't the only aspect of GPU mining that matters, of course, but it helps to be able to invest in new hardware and to have enough time to spend building and maintaining the GPU miners. If it were a constant process that also involved losing money, it wouldn't be as much fun.

Technology Progression

For a given mining algorithm, there is definitely a technology progression. We can look back on the technology that was used to mine Bitcoin and see how it first started off as Central Processing Unit (CPU) mining, then it moved to Graphical Processing Unit (GPU) mining, then Field Programmable Gate Array (FPGA), and then Application Specific Integrated Circuit (ASIC).
Throughout this evolution we have witnessed a variety of unsavory business practices that unfortunately still happen on occasion: ASIC miner manufacturers taking pre-orders 6 months in advance, GPU manufacturers creating commercial cards for large farms that are difficult for retail customers to secure, and ASIC miner manufacturers mining on gear for months before making it available for sale.
When a new crypto-currency is created, in many cases a new mining algorithm is created also. This is important, because if an existing algorithm was used, the coin would be open to a 51% attack from day one, and may not even be able to build a valid blockchain.
Because there's such a focus on profitable software, developers of GPU mining applications are usually able to write a mining application fairly rapidly, then iterate it to the limit of current GPU technology. If it looks like a promising new cryptocurrency, FPGA bitstream developers and ASIC hardware developers start working on their designs at the same time.
The people who create the hashing algorithms run by the miners are usually not very familiar with the design capabilities of hardware manufacturers. Building application-specific semiconductors is an industry that's almost 60 years old now, and FPGAs have been around for almost 35 years. This is an industry with very experienced engineers using advanced design and modeling tools.
Promising cryptocurrencies are usually ones that are deploying new technology, or going after a big market, and who have at least a team of talented software developers. In the best case, the project has a full-stack business team involving development, project management, systems administration, marketing, sales, and leadership. This is the type of project that attracts early investment from the market, which will drive the price of the coin up significantly in the first year.
For any cryptocurrency that's a worthwhile investment of time, money, and electricity for the hashing, there will be ASIC miners developed for it. Instead of fighting this technology progression, GPU miners may be better off recognizing it as inevitable and taking advantage of the cryptocurrency cycle to maximize GPU mining profitability instead.

Cryptocurrency Price Cycle

For quality crypto projects, in addition to the one-way technology progression of CPU -> GPU -> FPGA -> ASIC, there is an upward price progression. More importantly, there is a cryptocurrency price cycle that oscillates around that overall upward price progression. Plotted against time, a cycle with an upward progression looks like a sine wave with an ever-increasing average value, which is what we have seen so far with the Bitcoin price.
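The "sine wave around a rising average" idea can be sketched in a few lines of Python; every constant here (growth rate, amplitude, the 48-month period) is purely illustrative and not fitted to any real price data:

```python
import math

def cycle_price(t, base=100.0, growth=0.05, amplitude=0.6, period=48):
    """Toy model of the price cycle: a sine wave oscillating around an
    exponentially rising baseline. t is in months; every constant here
    is illustrative, not fitted to real market data."""
    baseline = base * math.exp(growth * t)
    return baseline * (1 + amplitude * math.sin(2 * math.pi * t / period))

# Each peak lands higher than the last because the baseline keeps rising.
first_peak = cycle_price(12)   # first sine maximum (period / 4)
second_peak = cycle_price(60)  # one full 48-month cycle later
```

The point of the model is simply that successive cycle tops (and bottoms) land higher than the previous ones, which is the pattern the miner playbook below relies on.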

Cryptocurrency price cycle and progression for miners
This means mining promising new cryptocurrencies with GPU miners, holding them as the price rises, and being ready to sell a significant portion in the first year. Just about every cryptocurrency is going to have a sharp price rise at some point, whether through institutional investor interest or by being the target of a pump-and-dump operation. It’s especially likely in the first year, while the supply is low and there is not much trading volume or liquidity on exchanges.
Miners need to operate in the world of government money, as well as cryptocurrency. The people who run mining businesses at some point have to start selling their mining proceeds to pay the bills, and to buy new equipment as the existing equipment becomes obsolete. Working to maximize profitability means more than just mining new cryptocurrencies, it also means learning when to sell and how to manage money.

Managing Cash for Miners

The worst thing that can happen to a business is to run out of cash. When that happens, the business usually shuts down and goes into bankruptcy. Sometimes an investor comes in and picks up the pieces, but at that point the former owners become employees.
There are two sides to managing cash - one is earning it, the other is spending it - and the cryptocurrency price cycle can tell the GPU miner when it is the best time to do certain things. Market tops and bottoms are easy to recognize in hindsight, and harder to see when you're in the middle of them. Even if a miner is able to recognize the tops and bottoms, it is difficult to act when there is so much hype and positivity at the top of the cycle, and so much gloom and doom at the bottom.
A decent rule of thumb for the last few cycles appears to be that at the top and bottom of each cycle, BTC is 10x as expensive in USD terms as it was at the same point in the previous cycle. Newer crypto projects tend to have bigger price swings than Bitcoin, and during the rising part of the pricing cycle there is the possibility that an altcoin will rise to 100x its starting price.
Taking profits from selling altcoins during the rise is important, but so is maintaining a reserve. In order to catch a 100x move, it may be worth the risk to put some of the altcoin on an exchange and set a very high limit order. For the larger cryptocurrencies like Bitcoin it is important to set trailing sell stops on the way up, and to not buy back in for at least a month if a sell stop gets triggered. Being able to read price charts, see support and resistance areas for price, and knowing how to set sell orders are an important part of mining profitability.
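A trailing sell stop of the kind described here can be sketched in a few lines; this is a toy illustration (a fixed percentage, no fees, slippage, or exchange API), not trading advice:

```python
def trailing_stop(prices, trail_pct=0.15):
    """Walk a price series and return (index, price) where a trailing
    sell stop fires: the stop level follows the highest price seen so
    far and triggers once price drops trail_pct below that high.
    Returns None if the stop is never hit. Toy illustration only."""
    high = prices[0]
    for i, p in enumerate(prices):
        high = max(high, p)
        if p <= high * (1 - trail_pct):
            return i, p
    return None

# The stop rides the rally to 150, then exits on the pullback to 120.
hit = trailing_stop([100, 120, 150, 140, 120], trail_pct=0.15)
```

The design choice is the one the author describes: the stop ratchets up with the price on the way up, so you keep most of a run while capping the give-back on the way down.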

Actions to Take During the Cycle

As the cycle starts to rise from the bottom, this is a good time to buy mining hardware - it will be inexpensive. It is also a good time to mine and buy altcoins, which are usually the first to see a price rise and will have larger price increases than Bitcoin.
On the rise of the cycle, this is a good time to see which altcoins are doing well from a project fundamentals standpoint, and which ones look like they are undergoing accumulation from investors.
Halfway through the rise of the cycle is the time to start selling altcoins for the larger project cryptos like Bitcoin. Miners will miss some of the profit at the top of the cycle, but will not run out of cash by doing this. This is also the time to stop buying mining hardware. Don’t worry, you’ll be able to pick up that same hardware used for a fraction of the price at the next bottom.
As the price nears the top of the cycle, sell enough Bitcoin and other cryptocurrencies to meet the following projected costs:
  • Mining electricity costs for the next 12 months
  • Planned investment into new miners for the next cycle
  • Additional funds needed for things like supporting a family or buying a Lambo
  • Taxes on all the capital gains from the sale of cryptocurrencies
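Turning that checklist into numbers is simple arithmetic. In this sketch the cost figures and the near-top coin price are hypothetical, and a flat `tax_rate` gross-up stands in for the capital-gains line item (real tax treatment varies by jurisdiction):

```python
def coins_to_sell(costs, price_per_coin, tax_rate=0.25):
    """Estimate how many coins to sell near a cycle top to cover the
    projected fiat costs, grossing the total up so the capital-gains
    tax on the sale itself is covered. All figures are hypothetical."""
    fiat_needed = sum(costs.values())
    gross = fiat_needed / (1 - tax_rate)  # sale proceeds are taxed too
    return gross / price_per_coin

projected = {
    "electricity_12mo": 24_000,  # mining power for the next year
    "new_miners": 30_000,        # hardware for the next cycle
    "living_costs": 36_000,      # the family / Lambo line item
}
sell = coins_to_sell(projected, price_per_coin=10_000)
```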
It may be worth selling 70-90% of crypto holdings, while maintaining a reserve in case there is a second upward move caused by government bankruptcies. But selling a large part of the crypto is helpful for maintaining profitability and having enough cash reserves to make it through the bottom part of the next cycle.
As the cycle has peaked and starts to decline, this is a good time to start investing in mining facilities and other infrastructure, brush up on trading skills, count your winnings, and take some vacation.
At the bottom of the cycle, it is time to start buying both used and new mining equipment. The bottom can be hard to recognize.
If you can continue to mine all the way through bottom part of the cryptocurrency pricing cycle, paying with the funds sold near the top, you will have a profitable and enjoyable cryptocurrency mining business. Any cryptocurrency you are able to hold onto will benefit from the price progression in the next higher cycle phase.

An Update on Horizen - formerly ZenCash

The team at Horizen recognizes the important part that GPU miners played in the early success of Zclassic and ZenCash, and there is always a welcoming attitude toward ZEN miners, past and present. About 1 year after ZenCash launched, ASIC miners became available for the Equihash algorithm. A chart of mining difficulty over time shows when it was time for GPU miners to move on to mining other cryptocurrencies.

Horizen Historical Block Difficulty Graph
Looking at the hashrate chart, it is straightforward to see that ASIC miners were deployed starting in June 2018. There also appears to have been a jump in mining hashrate in October of 2017. This may have been larger GPU farms switching over to mine Horizen, FPGAs on the network, or early versions of Equihash ASIC miners that were kept private.
The team understands the importance of the cryptocurrency price cycle as it affects the funds from the Horizen treasury and the investments that can be made. 20% of each block mined is sent to the Horizen non-profit foundation for use to improve the project. Just like miners have to manage money, the team has to decide whether to spend funds when the price is high or convert it to another form in preparation for the bottom part of the cycle.
During the rise and upper part of the last price cycle Horizen was working hard to maximize the value of the project through many different ways, including spending on research and development, project management, marketing, business development with exchanges and merchants, and working to create adoption in all the countries of the world.
During the lower half of the cycle Horizen has reduced the team to the essentials, and worked to build a base of users, relationships with investors, exchanges, and merchants, and continue to develop the higher priority software projects. Lower priority software development, going to trade shows, and paying for business partnerships like exchanges and applications have all been completely stopped.
Miners are still a very important part of the Horizen ecosystem, earning 60% of the block reward. 20% goes to node operators, with 20% to the foundation. In the summer of 2018 the consensus algorithm was modified slightly to make it much more difficult for any group of miners to perform a 51% attack on Horizen. This has so far proven effective.
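That 60/20/20 split is easy to express in code; the 12.5-coin block subsidy used in the example is an assumed figure for illustration, not a number taken from the post:

```python
def split_block_reward(reward):
    """Apportion one block reward per the split described above:
    60% to miners, 20% to node operators, 20% to the foundation."""
    return reward * 0.60, reward * 0.20, reward * 0.20

# Assumed 12.5-coin subsidy, purely for illustration.
miner_cut, node_cut, treasury_cut = split_block_reward(12.5)
```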
The team is strong, we provide monthly updates on a YouTube live stream on the first Wednesday of each month where all questions asked during the stream are addressed, and our marketing team works to develop awareness of Horizen worldwide. New wallet software was released recently, and it is the foundation application for people to use and manage their ZEN going forward.
Horizen is a Proof of Work cryptocurrency, and there is no plan to change that by the current development team. If there is a security or centralization concern, there may be change to the algorithm, but that appears unlikely at this time, as the hidden chain mining penalty looks like it is effective in stopping 51% attacks.
During 2019 and 2020 the Horizen team plans to release many new software updates:
  • Sidechains modification to main software
  • Sidechain Software Development Kit
  • Governance and Treasury application running on a sidechain
  • Node tracking and payments running on a sidechain
  • Conversion from blockchain to a Proof of Work BlockDAG using Equihash mining algorithm
After these updates are working well, the team will work to transition Horizen over to a governance model where major decisions and the allocation of treasury funds are done through a form of democratic voting. At this point all the software developed by Horizen is expected to be open source.
When the governance is transitioned, the project should be as decentralized as possible. The goal of decentralization is to enable resilience and to prevent capture of the project by regulators, governments, criminal organizations, large corporations, or a small group of individuals.
Everyone involved with Horizen can be proud of what we have accomplished together so far. Miners who were there for the early mining and growth of the project played a large part in securing the network, evangelizing to new community members, and helping to create liquidity on new exchanges. Miners are still a very important part of the project and community. Together we can look forward to achieving many new goals in the future.

Here are some links to find out more about Horizen.
Horizen Website – https://horizen.global
Horizen Blog – https://blog.horizen.global
Horizen Reddit - https://www.reddit.com/Horizen/
Horizen Discord – https://discord.gg/SuaMBTb
Horizen Github – https://github.com/ZencashOfficial
Horizen Forum – https://forum.horizen.global/
Horizen Twitter – https://twitter.com/horizenglobal
Horizen Telegram – https://t.me/horizencommunity
Horizen on Bitcointalk – https://bitcointalk.org/index.php?topic=2047435.0
Horizen YouTube Channel – https://www.youtube.com/c/Horizen/
Buy or Sell Horizen
Horizen on CoinMarketCap – https://coinmarketcap.com/currencies/zencash/

About the Author:

Rolf Versluis is Co-Founder and Executive Advisor of the privacy oriented cryptocurrency Horizen. He also operates multiple private cryptocurrency mining facilities with hundreds of operational systems, and has a blog and YouTube channel on crypto mining called Block Operations.
Rolf applies his engineering background as well as management and leadership experience from running a 60 person IT company in Atlanta and as a US Navy nuclear submarine officer operating out of Hawaii to help grow and improve the businesses in which he is involved.
_____________________________________________________________________________________________
Thank you again for the Ask Me Anything - please do. I'll be checking the post and answering questions actively from 28 Feb to 6 Mar 2019 - Rolf
submitted by Blockops to gpumining

The Problem with PoW

Miners have always had it rough..
"Frustrated Miners"

The Problem with PoW
(and what is being done to solve it)

Proof of Work (PoW) is one of the most commonly used consensus mechanisms entrusted to secure and validate many of today’s most successful cryptocurrencies, Bitcoin being one. Battle-hardened and having weathered the test of time, Bitcoin has demonstrated the undeniable strength and reliability of the PoW consensus model through sheer market saturation, and of course, its persistency.
In addition to the cost of powerful computing hardware, miners prove that they are benefiting the network by expending energy in the form of electricity, solving and hashing away at complex math problems on their computers using whatever suitable tools they have at their disposal. The mathematics involved in proof of work revolve around unique algorithms, each with its own benefits and vulnerabilities, and mining can require different software and hardware depending on the coin.
Because each block has a unique and effectively random hash, or “puzzle”, to solve, the “work” has to be performed for each block individually, and the difficulty of the problem can be increased as the speed at which blocks are solved increases.
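A minimal sketch of such a puzzle: search for a nonce whose hash falls below a difficulty target. Real chains hash a structured block header (Bitcoin uses double SHA-256 with a compact target encoding); this simplification just keeps the idea visible:

```python
import hashlib

def mine(block_data, difficulty_bits=12):
    """Find a nonce such that sha256(block_data + nonce) has at least
    difficulty_bits leading zero bits. A stand-in for a real PoW header
    hash; Bitcoin actually double-SHA-256es a structured header."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        h = hashlib.sha256(f"{block_data}:{nonce}".encode()).digest()
        if int.from_bytes(h, "big") < target:
            return nonce, h.hex()
        nonce += 1

# Each extra difficulty bit doubles the expected number of hash attempts.
nonce, digest = mine("example block", difficulty_bits=12)
```

Verification is the cheap half of the asymmetry: anyone can re-hash the winning nonce once and check it against the target, while finding it took thousands of attempts.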

Hashrates and Hardware Types

While proof of work is an effective means of securing a blockchain, it inherently promotes competition among miners seeking higher and higher hashrates, due to the rewards earned by the node that wins the right to add the next block. In turn, these higher hashrates benefit the blockchain, providing better security when they are the result of a well-distributed, decentralized network of miners.
When Bitcoin first launched its genesis block, it was mined exclusively by CPUs. Over the years, various programmers and developers have devised newer, faster, and more energy-efficient ways to generate higher hashrates: some by perfecting the software end of things, and others, when the incentives are great enough, by creating expensive specialized hardware such as ASICs (application-specific integrated circuits). With the express purpose of extracting every last bit of hashing power, efficiency being paramount, ASICs are stripped-down, bare-minimum hardware representations of a specific coin's algorithm.
This gives ASICs a massive advantage over CPUs and GPUs in terms of raw hashing power and energy consumption, but with the significant drawback of being very expensive to design and manufacture, translating to a high economic barrier for the casual miner. Because they are hardware representations of a single targeted algorithm, if a project decides to fork and change algorithms suddenly, your powerful brand-new ASIC becomes a very expensive paperweight. The high costs of developing and manufacturing ASICs, and the associated risks, make them unfit for mass adoption at this time.
Somewhere on the high end, in the vast hashrate expanse between GPU and ASIC, sits the FPGA (field-programmable gate array). FPGAs are basically ASICs that trade some efficiency for flexibility: they are reprogrammable and often used in the “field” to test an algorithm before it is implemented in an ASIC. As a precursor to the ASIC, FPGAs are somewhat similar to GPUs in their flexibility, but they require advanced programming skills and, like ASICs, are expensive and still fairly uncommon.

2 Guys 1 ASIC

One of the issues with proof of work incentivizing the pursuit of higher hashrates lies in how the network calculates block reward (coinbase) payouts and rewards miners for the work they have submitted. If a coin generates, say, a block a minute, and this is a constant, then what happens if more miners jump on the network and do more work? The network cannot pay out more than 1 block reward per minute, so a difficulty mechanism is used to maintain balance. The difficulty scales up and down in response to the overall nethash: if many miners join the network, or extremely high-hashing devices such as ASICs or FPGAs jump on, the network responds accordingly, using the difficulty mechanism to make the problems harder, effectively giving an edge to hardware that can solve them faster and balancing the network. This not only maintains the block-a-minute reward, it has the added side effect of energy requirements that scale up with network adoption.
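The difficulty mechanism described above amounts to a retarget rule: scale difficulty by how far the last window of blocks deviated from the target pace. A sketch, with the 4x clamp many Bitcoin-like chains apply per adjustment (window sizes and clamps vary by coin, so treat the constants as assumptions):

```python
def retarget(old_difficulty, actual_timespan, target_timespan, max_step=4.0):
    """Scale difficulty so blocks return to the target pace: a window of
    blocks that arrived too fast raises difficulty proportionally, too
    slow lowers it. The clamp mirrors the 4x per-adjustment bound used
    by many Bitcoin-like chains; details vary by coin."""
    ratio = target_timespan / actual_timespan
    ratio = max(1 / max_step, min(max_step, ratio))
    return old_difficulty * ratio

# Blocks came in twice as fast as intended, so difficulty doubles.
new_diff = retarget(1000.0, actual_timespan=600, target_timespan=1200)
```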
Imagine, for example, that one miner gets on a network all alone with a CPU doing 50 MH/s and is getting all 100 coins that can possibly be paid out in a day. Then, if another miner jumps on the network with the same CPU, each miner would receive 50 coins a day instead of 100, since they are splitting the required work evenly, despite the fact that the net electrical output has doubled along with the work. Electricity costs miners money and is a factor in driving up coin price along with adoption, and since more people are now mining, the coin is less centralized. Now let's say a large corporation has found it profitable to manufacture an ASIC for this coin, knowing it will make its money back mining it or selling the units to professionals. It joins the network doing 900 MH/s and will be pulling in 90 coins a day, while the two guys with their CPUs each get 5. Those two guys aren't very happy, but the corporation is. Not only does this negatively affect the miners, it compromises the security of the entire network by centralizing the coin supply and hashrate, opening the doors to double spends and 51% attacks from potential malicious actors. Uncertainty of motives and questionable validity in a distributed ledger do not mix.
When technology advances in a field, it is usually applauded and welcomed with open arms, but in the world of crypto things can work quite differently. One of the glaring flaws in the current model and the advent of specialized hardware is that it's never-ending. Suppose the two men from the rather extreme example above took out a loan to get themselves that ASIC they heard about that can earn 90 coins a day. When they join the other ASIC on the network, the difficulty adjusts to keep daily payouts at 100, and each will receive only 33 coins instead of 90, since the reward is now split three ways. Now what happens when a better ASIC is released by that corporation? Hopefully those two guys were able to pay off their loans and sell their old ASICs before they became obsolete.
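The payout arithmetic in these examples is just proportional sharing of a fixed daily emission:

```python
def daily_payouts(hashrates, daily_emission=100.0):
    """Split a fixed daily coin emission among miners in proportion to
    hashrate, as in the worked example above (rates in MH/s)."""
    total = sum(hashrates.values())
    return {name: daily_emission * rate / total
            for name, rate in hashrates.items()}

# Two 50 MH/s CPUs against one 900 MH/s ASIC: 5 / 5 / 90 coins a day.
shares = daily_payouts({"cpu1": 50, "cpu2": 50, "asic": 900})
```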
This system, as it stands now, only perpetuates a never ending hashrate arms race in which the weapons of choice are usually a combination of efficiency, economics, profitability and in some cases control.

Implications of Centralization

This brings us to another big concern with expensive specialized hardware: the risk of centralization. Because they are so expensive and inaccessible to the casual miner, ASICs and FPGAs predominantly remain limited to a select few. Centralization occurs when one small group or a single entity controls the vast majority of hash power, and as a result coin supply, and is able to exert its influence to manipulate the market or, in some cases, the network itself (usually the case with dishonest nodes or bad actors).
This is entirely antithetical to what cryptocurrency was born of, and since its inception many concerted efforts have been made to avoid centralization at all costs. An entity in control of a centralized coin would have the power to manipulate the price, and having a centralized hashrate would let it affect network usability and reliability, and even perform double spends, leading to the demise of the coin, among other things.
The world of crypto is a strange new place, with rapidly growing advancements across many fields, economies, and borders, leaving plenty of room for improvement; while it may feel like a never-ending game of catch-up, there are many talented developers and programmers working around the clock to bring us all more sustainable solutions.

The Rise of FPGAs

With the recent adoption of the commonly used coding language C++ for development, and due to their overall flexibility, FPGAs are becoming somewhat more common, especially in larger farms and in industrial settings; but they still remain primarily out of the hands of most mining enthusiasts and almost unheard of to the average hobby miner. Things appear to be changing, though - one example of which I'll discuss below - and some think we will soon see a day when mining with a CPU or GPU just won't cut it any longer, and the market will be dominated by FPGAs and specialized ASICs, bringing efficiency gains for proof of work while also carelessly leading us all toward the next round of spending.
A perfect real-world example of the effect specialized hardware has had on the crypto community was recently discovered involving a fairly new project called VerusCoin and a fairly new, relatively more economically accessible FPGA. The FPGA is designed to target specific altcoins whose algorithms do not require RAM overhead. It was discovered that the company had released a new algorithm, kept secret from the public, which could effectively mine Verus at 20x the speed of GPUs, the next-fastest hardware type mining on the Verus network.
Unfortunately this was done with a deliberately secret approach, calling the Verus algorithm “Algo1” and encouraging owners of the FPGA to never speak of the algorithm in public channels, admonishing a user when they did let the cat out of the bag. The problem with this business model is that it is parasitic in nature. In an ecosystem where advancements can benefit the entire crypto community, this sort of secret mining approach also does not support the philosophies set forth by the Bitcoin or subsequent open source and decentralization movements.
Although this was not done in the spirit of open source, it does hint to an important step in hardware innovation where we could see more efficient specialized systems within reach of the casual miner. The FPGA requires unique sets of data called a bitstream in order to be able to recognize each individual coin’s algorithm and mine them. Because it’s reprogrammable, with the support of a strong development team creating such bitstreams, the miner doesn’t end up with a brick if an algorithm changes.

All is not lost thanks to.. um.. Technology?

Shortly after discovering FPGAs on the network, the Verus developers quickly designed, tested, and implemented a new, much more complex and improved algorithm via a fork that enabled Verus to transition smoothly from VerusHash 1.0 to VerusHash 2.0 at block 310,000. Since the fork, VerusHash 2.0 has done exactly what it was designed for: equalizing hardware performance relative to the device being used while enabling CPUs (the most widely available "ASICs") to mine side by side with GPUs at a profit, and it appears this will also apply to other specialized hardware. This is something no other project has been able to do until now. Rather than pursue the folly of so many other projects before it, attempting to be "ASIC proof", Verus effectively achieved and presents to the world an entirely new model of "hardware homogeny". As the late, great Bruce Lee once said: "Don't get set into one form, adapt it and build your own, and let it grow, be like water."
In the design of VerusHash 2.0, Verus has shown it doesn’t resist progress like so many other new algorithms try to do, it embraces change and adapts to it in the way that water becomes whatever vessel it inhabits. This new approach- an industry first- could very well become an industry standard and in doing so, would usher in a new age for proof of work based coins. VerusHash 2.0 has the potential to correct the single largest design flaw in the proof of work consensus mechanism- the ever expanding monetary and energy requirements that have plagued PoW based projects since the inception of the consensus mechanism. Verus also solves another major issue of coin and net hash centralization by enabling legitimate CPU mining, offering greater coin and hashrate distribution.
Digging a bit deeper it turns out the Verus development team are no rookies. The lead developer Michael F Toutonghi has spent decades in the field programming and is a former Vice President and Technical Fellow at Microsoft, recognized founder and architect of Microsoft's .Net platform, ex-Technical Fellow of Microsoft's advertising platform, ex-CTO, Parallels Corporation, and an experienced distributed computing and machine learning architect. The project he helped create employs and makes use of a diverse myriad of technologies and security features to form one of the most advanced and secure cryptocurrency to date. A brief description of what makes VerusCoin special quoted from a community member-
"Verus has a unique and new consensus algorithm called Proof of Power which is a 50% PoW/50% PoS algorithm that solves theoretical weaknesses in other PoS systems (Nothing at Stake problem for example) and is provably immune to 51% hash attacks. With this, Verus uses the new hash algorithm, VerusHash 2.0. VerusHash 2.0 is designed to better equalize mining across all hardware platforms, while favoring the latest CPUs over older types, which is also one defense against the centralizing potential of botnets. Unlike past efforts to equalize hardware hash-rates across different hardware types, VerusHash 2.0 explicitly enables CPUs to gain even more power relative to GPUs and FPGAs, enabling the most decentralizing hardware, CPUs (due to their virtually complete market penetration), to stay relevant as miners for the indefinite future. As for anonymity, Verus is not a "forced private", allowing for both transparent and shielded (private) transactions...and private messages as well"

If other projects can learn from this and adopt a similar approach, or continue to innovate with new ideas, it could mean an end to all the doom-and-gloom predictions that CPU and GPU mining are dead. It would offer a much-needed reprieve and an alternative to miners who have faced the difficult decision of either pulling the plug and shutting down shop, or breaking down their rigs to sell off parts and buy new, more expensive hardware - and in so doing, present an overall unprecedented level of decentralization not yet seen in cryptocurrency.
Technological advancements led us to the world of secure digital currencies and the progress being made with hardware efficiencies is indisputably beneficial to us all. ASICs and FPGAs aren’t inherently bad, and there are ways in which they could be made more affordable and available for mass distribution. More than anything, it is important that we work together as communities to find solutions that can benefit us all for the long term.

In an ever changing world where it may be easy to lose sight of the real accomplishments that brought us to this point one thing is certain, cryptocurrency is here to stay and the projects that are doing something to solve the current problems in the proof of work consensus mechanism will be the ones that lead us toward our collective vision of a better world- not just for the world of crypto but for each and every one of us.
submitted by Godballz to CryptoCurrency

Profitable Crypto Mining: ASIC vs GPU, Which One Is Better?

If you’re new to mining you probably have multiple questions running through your head right now. Good news is that it gets easier with time, assuming that you do your homework and research, and we will try to help you out.
One of the common questions is whether one should choose GPU or ASIC mining and we definitely have some advice on that topic.
When we’re considering classic PoW mining, we can quickly rule out CPU hardware for not being efficient, and FPGA hardware because of its high cost. This leaves you with ASIC and GPU to choose from.


Buying Mining Equipment

Let’s get things straight - you won’t be able to buy ASIC devices in any of your local electronics shops, even the biggest ones. There are two ways to get this hardware: buy it online, which shouldn’t be a problem these days unless it’s the newest model you’re after, or find a local company that sells ASIC equipment.
Also, you can try to purchase the equipment directly from the manufacturer; however, mind the huge customs and delivery fees if the company is located abroad.
It is highly recommended to test ASICs before buying them to make sure the equipment works properly.
GPU or graphics cards and other equipment that you will need to build your very own mining farm can be easily purchased at a regular computer store. The only problem you may have is getting the right set of hardware, so make sure to come prepared.
When buying a used (second-hand) graphics card don’t forget to test it.
What’s better?
If you’re not into hardware and have no clue how to set up a farm by yourself, buying ASIC equipment would be the better option, as you won’t need to build anything yourself.

Warranty Policy

In general, the official warranty policy for ASIC hardware is up to 180 days from when the equipment was shipped to the buyer. When a seller is confident about the quality of their equipment, they may offer their own 1-month warranty on top of that.
When you’re buying computer hardware, in most cases you get a full 2-year warranty, including exchange or repair of the equipment.
What’s better?
A warranty policy is especially important when you have no chance to check the equipment yourself or when you’re buying a large inventory of it. Also, if you plan to overclock, you will probably want a decent warranty as well.
We should add that if you use the equipment properly and conduct regular maintenance, both ASICs and GPUs can work well past the warranty period.

Setting Up Process

With ASICs it’s simple: you plug it in, connect it, pick a pool to join and start mining right away.
With GPUs, it’s a little more complicated. First, you need to build your farm. You will need a frame, a motherboard with a CPU and cooling installed, a storage unit, a power supply, risers and video cards. If you have no experience assembling computer hardware, you’re gonna need to set aside some time and be prepared to put in extra effort. Once your rig is ready, you will have to install an OS and optimize it, which is usually even harder than assembling the rig. But luckily we’ve got a solution for that: CoinFly can do the work for you and help you set up and optimize your equipment.
What’s better?
Although ASICs are very easy, you shouldn’t be quick to give up on GPU mining. If assembling computer hardware is not a big problem for you, CoinFly will help you with the rest of the setup.

Maintenance

ASIC equipment won’t give you too much trouble: it’s safe, stable, and doesn’t require any special knowledge. Maintenance comes down to cleaning off dust and oiling the fans.
When dealing with rigs, you will have to work a little harder and learn at least the basics of graphics card temperatures and operating frequencies. A stable workflow depends heavily on the software, and since software has a tendency to fail, that can become a problem. Unless you’re using CoinFly, that is: our system will notify you in case of emergency so you can tune your equipment online.
What’s better?
Once again, when it comes to maintenance, ASICs are almost trouble-free. GPU rigs are a bit trickier, but with the right tools, like CoinFly, to monitor their work, they can serve you just fine.

The Noise

ASICs are loud: in a room with a working ASIC, you’re gonna need to shout for people to hear you.
GPU farms have no such problem. Some of them are almost silent, and that doesn’t compromise the cooling process at all.
What’s better?
Maybe the noise level of your equipment wasn’t the first issue on your list, but we recommend considering it. ASICs are really only suitable for commercial and industrial premises.

Mining

ASICs can work with only one algorithm, mining one or a few types of cryptocurrency, and are perfect for mining Bitcoin and its forks.
GPU rigs are universal: you can mine a huge variety of coins if you set your miner up right.
What’s better?
If you want to mine Bitcoin, you’ve got to go with an ASIC. But think again about whether that’s what you’re really after. After all, you can mine any altcoin you like with your GPU rig and then simply exchange it for BTC, and if you’re lucky enough to mine a coin that ends up doing well, all the better. ASICs do not give you that choice; however, their mining capability is higher.

Relevance of the Equipment

ASICs quickly go out of date as new models come along. Back in the day, new versions used to come out every six months, each up to 10 times more efficient than the last. In general, you need to replace your ASIC hardware every year.
GPU equipment can serve you perfectly well for two to three years, and if you wish to sell the graphics cards afterwards, that won’t be a problem either.
What’s better?
In terms of relevance, it’s probably reasonable to go with the GPU.

Return on Investment

In the long run, the profitability of ASICs is higher, but because new models are released quite frequently, you cannot expect huge profits. It is always important to do your research and buy the most current equipment.
GPU hardware will take its time to pay you back, but much depends on whether you manage to find the right coin to mine, which can eventually increase your profits.
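To make the payback question concrete, here is a back-of-the-envelope sketch. All the figures in it are made up for illustration; real payback depends on coin price, difficulty and power costs, none of which stay constant.

```python
def payback_days(hardware_cost: float,
                 daily_coins: float,
                 coin_price: float,
                 daily_power_cost: float) -> float:
    """Days until mining revenue covers the hardware cost.

    Assumes (unrealistically) constant price, difficulty and power cost.
    """
    daily_profit = daily_coins * coin_price - daily_power_cost
    if daily_profit <= 0:
        return float("inf")  # never pays back at these numbers
    return hardware_cost / daily_profit

# Hypothetical rig: $2,400 of hardware, 0.05 coin/day at $20/coin, $0.40/day power
print(round(payback_days(2400, 0.05, 20.0, 0.40)))  # → 4000
```

Running the same numbers with a higher coin price or cheaper electricity shows how sensitive the payback period is to assumptions you don’t control.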
What’s better?
ASIC mining is definitely a good option for those who don’t want to constantly monitor the crypto market.
But if you’re interested in what’s happening in the crypto space and also have time to do your own research, a GPU farm would be the better choice. If you’re not willing to spend your effort on that, CoinFly’s Autopilot mode will help you mine the most profitable coin on the market automatically.

Conclusion

ASICs are great for people who can provide a non-residential space for mining and who aren’t willing to spend too much time and effort setting up equipment and staying on top of the latest trends in the crypto industry.
GPU rigs are suitable for mining at home and won’t scare away crypto and computer enthusiasts. If you’re just starting your mining journey and aren’t sure how to go about it, we recommend registering on CoinFly. From setting up your hardware to tuning it online and picking the best coin to mine at the moment, we’ve got you covered!
submitted by coinfly to CoinFly

Crypto and the Latency Arms Race: Crypto Exchanges and the HFT Crowd



News by Coindesk: Max Boonen
Carrying on from an earlier post about the evolution of high frequency trading (HFT), how it can harm markets and how crypto exchanges are responding, here we focus on the potential longer-term impact on the crypto ecosystem.
First, though, we need to focus on the state of HFT in a broader context.

Conventional markets are adopting anti-latency arbitrage mechanisms

In conventional markets, latency arbitrage has increased toxicity on lit venues and pushed trading volumes over-the-counter or into dark pools. In Europe, dark liquidity has increased in spite of efforts by regulators to clamp down on it. In some markets, regulation has actually contributed to this. Per the SEC:
“Using the Nasdaq market as a proxy, [Regulation] NMS did not seem to succeed in its mission to increase the display of limit orders in the marketplace. We have seen an increase in dark liquidity, smaller trade sizes, similar trading volumes, and a larger number of “small” venues.”
Why is non-lit execution remaining or becoming more successful in spite of its lower transparency? In its 2014 paper, BlackRock came out in favour of dark pools in the context of best execution requirements. It also lamented message congestion and cautioned against increasing tick sizes, features that advantage latency arbitrageurs. (This echoes the comment to CoinDesk of David Weisberger, CEO of Coinroutes, who explained that the tick sizes typical of the crypto market are small and therefore do not put slower traders at much of a disadvantage.)
Major venues now recognize that the speed race threatens their business model in some markets, as it pushes those “slow” market makers with risk-absorbing capacity to provide liquidity to the likes of BlackRock off-exchange. Eurex has responded by implementing anti-latency arbitrage (ALA) mechanisms in options:
“Right now, a lot of liquidity providers need to invest more into technology in order to protect themselves against other, very fast liquidity providers, than they can invest in their pricing for the end client. The end result of this is a certain imbalance, where we have a few very sophisticated liquidity providers that are very active in the order book and then a lot of liquidity providers that have the ability to provide prices to end clients, but are tending to do so more away from the order book”, commented Jonas Ullmann, Eurex’s head of market functionality. Such views are increasingly supported by academic research.
XTX identifies two categories of ALA mechanisms: policy-based and technology-based. Policy-based ALA refers to a venue simply deciding that latency arbitrageurs are not allowed to trade on it. Alternative venues to exchanges (going under various acronyms such as ECN, ATS or MTF) can allow traders to either take or make, but not engage in both activities. Others can purposefully select — and advertise — their mix of market participants, or allow users to trade in separate “rooms” where undesired firms are excluded. The rise of “alternative microstructures” is mostly evidenced in crypto by the surge in electronic OTC trading, where traders can receive better prices than on exchange.
Technology-based ALA encompasses delays, random or deterministic, added to an exchange’s matching engine to reduce the viability of latency arbitrage strategies. The classic example is a speed bump where new orders are delayed by a few milliseconds, but the cancellation of existing orders is not. This lets market makers place fresh quotes at the new prevailing market price without being run over by latency arbitrageurs.
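The asymmetric delay described above can be sketched in a few lines. This is a toy model, not any real exchange’s implementation: new orders sit in a delay queue for the bump interval while cancels pass through immediately.

```python
import heapq

class SpeedBumpQueue:
    """Toy matching-engine inlet: new orders are delayed by a fixed
    bump, cancels pass through immediately (the asymmetric speed
    bump described above). Timestamps are plain floats in seconds."""

    def __init__(self, bump_ms: float = 8.0):
        self.bump = bump_ms / 1000.0
        self._heap = []   # (release_time, seq, message)
        self._seq = 0     # tie-breaker keeps FIFO order for equal times

    def submit(self, message: dict, now: float):
        delay = 0.0 if message["type"] == "cancel" else self.bump
        heapq.heappush(self._heap, (now + delay, self._seq, message))
        self._seq += 1

    def release(self, now: float):
        """Messages the matching engine may process at time `now`."""
        out = []
        while self._heap and self._heap[0][0] <= now:
            out.append(heapq.heappop(self._heap)[2])
        return out

q = SpeedBumpQueue(bump_ms=8.0)
q.submit({"type": "cancel", "id": 1}, now=0.000)  # market maker pulls a stale quote
q.submit({"type": "new", "id": 2}, now=0.001)     # latency arbitrageur's aggressive order
print([m["id"] for m in q.release(now=0.005)])    # → [1]  (only the cancel got through)
```

At 5 ms the cancel has already been processed while the aggressive order is still held until 9 ms, which is exactly the window the market maker needs to refresh a quote.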
As a practical example, the London Metal Exchange recently announced an eight-millisecond speed bump on some contracts that are prime candidates for latency arbitrageurs due to their similarity to products trading on the much bigger CME in Chicago.
Why 8 milliseconds? First, microwave transmission between Chicago and the US East Coast is 3 milliseconds faster than fibre optic lines. From there, the $250,000 a month Hibernia Express transatlantic cable helps you get to London another 4 milliseconds faster than cheaper alternatives. Add a millisecond for internal latencies such as not using FPGAs and 8 milliseconds is the difference for a liquidity provider between investing tens of millions in speed technology or being priced out of the market by latency arbitrage.
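Spelled out, the latency budget in the paragraph above is just a sum of the three edges quoted there:

```python
# Rough latency budget, using the figures quoted in the text above:
legs_ms = {
    "microwave edge, Chicago -> US East Coast": 3,
    "Hibernia Express edge, New York -> London": 4,
    "internal latency (e.g. no FPGAs)": 1,
}
print(sum(legs_ms.values()))  # → 8
```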
With this in mind, let’s consider what the future holds for crypto.

Crypto exchanges must not forget their retail roots

We learn from conventional markets that liquidity benefits from a diverse base of market makers with risk-absorption capacity.
Some have claimed that the spread compression witnessed in the bitcoin market since 2017 is due to electronification. Instead, I posit that it is greater risk-absorbing capacity and capital allocation that has improved the liquidity of the bitcoin market, not an increase in speed; in fact, being a fast exchange with colocation, such as Gemini, has not translated into higher volumes. Old-timers will remember Coinsetter, a company that, per the Bitcoin Wiki, “was created in 2012, and operates a bitcoin exchange and ECN. Coinsetter’s CSX trading technology enables millisecond trade execution times and offers one of the fastest API data streams in the industry.” The Wiki page should use the past tense, as Coinsetter failed to gain traction, was acquired in 2016 and subsequently closed.
Exchanges that invest in scalability and user experience will thrive (BitMEX comes to mind). Crypto exchanges that favour the fastest traders (by reducing jitter, etc.) will find that winner-takes-all latency strategies do not improve liquidity. Furthermore, they risk antagonising the majority of their users, who are naturally suspicious of platforms that sell preferential treatment.
It is baffling that Huobi’s head of Russia boasted to CoinDesk that: “The option [of co-location] allows [selected clients] to make trades 70 to 100 times faster than other users”. The article notes that Huobi doesn’t charge for it, but of course, not everyone can sign up.
Contrast this with one of the most successful exchanges today: Binance. It actively discourages some HFT strategies by tracking metrics such as order-to-trade ratios and temporarily blocking users that breach certain limits. Market experts know that Binance remains extremely relevant to price discovery, irrespective of its focus on a less professional user base.
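A per-user order-to-trade throttle of the kind described above can be sketched as follows. The thresholds and windowing policy here are assumptions for illustration, not Binance’s actual rules.

```python
from collections import defaultdict

class OrderTradeRatioMonitor:
    """Sketch of a per-user order-to-trade throttle. A user who sends
    many orders but completes almost no trades (quote stuffing, probing)
    gets flagged for a temporary block."""

    def __init__(self, max_ratio: float = 100.0, min_orders: int = 50):
        self.max_ratio = max_ratio    # assumed limit, for illustration
        self.min_orders = min_orders  # ignore users with little activity
        self.orders = defaultdict(int)
        self.trades = defaultdict(int)

    def record_order(self, user: str):
        self.orders[user] += 1

    def record_trade(self, user: str):
        self.trades[user] += 1

    def is_blocked(self, user: str) -> bool:
        o, t = self.orders[user], self.trades[user]
        if o < self.min_orders:   # too little activity to judge
            return False
        ratio = o / t if t else float("inf")
        return ratio > self.max_ratio

mon = OrderTradeRatioMonitor(max_ratio=100.0, min_orders=50)
for _ in range(5000):             # quote-stuffing pattern: many orders...
    mon.record_order("hft_1")
mon.record_trade("hft_1")         # ...but almost no fills
print(mon.is_blocked("hft_1"))    # → True
```

In practice such counters would reset over a rolling window; the point is only that the metric is cheap to track and directly targets the behaviour, not the trader’s speed.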
Other exchanges, take heed.
Coinbase closed its entire Chicago office, where 30 engineers had worked on a faster matching engine, an exercise rumoured to have cost $50mm. I’d bet that, after much internal debate, the company finally realised it wouldn’t recoup its investment and that its value derived from having onboarded 20 million users, not from upgrading systems that are already fast and reliable by the standards of crypto.
It is also unsurprising that Kraken’s Steve Hunt, a veteran of low-latency torchbearer Jump Trading, commented to CoinDesk that: “We want all customers regardless of size or scale to have equal access to our marketplace”. Experience speaks.
In a recent article on CoinDesk, Matt Trudeau of ErisX points to the lower reliability of cloud-based services compared to dedicated, co-located and cross-connected gateways. That much is true. Web-based technology puts the emphasis on serving the greatest number of users concurrently, not on serving a subset of users deterministically and at the lowest latency possible. That is the point. Crypto might be the only asset class that is accessible directly to end users with a low number of intermediaries, precisely because of the crypto ethos and how the industry evolved. It is cheaper to buy $500 of bitcoin than it is to buy $500 of Microsoft shares.
Trudeau further remarks that official, paid-for co-location is better than what he pejoratively calls “unsanctioned colocation,” the fact that crypto traders can place their servers in the same cloud providers as the exchanges. The fairness argument is dubious: anyone with $50 can set up an Amazon AWS account and run next to the major crypto exchanges, whereas cheap co-location starts at $1,000 a month in the real world. No wonder “speed technology revenues” are estimated at $1 billion for the major U.S. equity exchanges.
For a crypto exchange, to reside in a financial, non-cloud data centre with state-of-the-art network latencies might ironically impair the likelihood of success. The risk is that such an exchange becomes dominated on the taker side by the handful of players that already own or pay for the fastest communication routes between major financial data centres such as Equinix and the CME in Chicago, where bitcoin futures are traded. This might reduce liquidity on the exchange because a significant proportion of the crypto market’s risk-absorption capacity is coming from crypto-centric funds that do not have the scale to operate low-latency strategies, but might make up the bulk of the liquidity on, say, Binance. Such mom-and-pop liquidity providers might therefore shun an exchange that caters to larger players as a priority.

Exchanges risk losing market share to OTC liquidity providers

While voice trading in crypto has run its course, a major contribution to the market’s increase in liquidity circa 2017–2018 was the risk appetite of the original OTC voice desks such as Cumberland Mining and Circle.
Automation really shines in bringing together risk-absorbing capacity tailored to each client (which is impossible on anonymous exchanges) with seamless electronic execution. In contrast, latency-sensitive venues can see liquidity evaporate in periods of stress, as happened to a well-known and otherwise successful exchange on 26 June which saw its bitcoin order book become $1,000 wide for an extended period of time as liquidity providers turned their systems off. The problem is compounded by the general unavailability of credit on cash exchanges, an issue that the OTC market’s settlement model avoids.
As the crypto market matures, the business model of today’s major cash exchanges will come under pressure. In the past decade, the FX market has shown that retail traders benefit from better liquidity when they trade through different channels than institutional speculators. Systematic internalizers demonstrate the same in equities. This fact of life will apply to crypto. Exchanges have to pick a side: either cater to retail (or retail-driven intermediaries) or court HFTs.
Now that an aggregator like Tagomi runs transaction cost analysis for their clients, it will become plainly obvious to investors with medium-term and long-term horizons (i.e. anyone not looking at the next 2 seconds) that their price impact on exchange is worse than against electronic OTC liquidity providers.
Today, exchange fee structures are awkward because they must charge small users a lot to make up for crypto’s exceptionally high compliance and onboarding costs. Onboarding a single, small value user simply does not make sense unless fees are quite elevated. Exchanges end up over-charging large volume traders such as B2C2’s clients, another incentive to switch to OTC execution.
In the alternative, what if crypto exchanges focus on HFT traders? In my opinion, the CME is a much better venue for institutional takers as fees are much lower and conventional trading firms will already be connected to it. My hypothesis is that most exchanges will not be able to compete with the CME for fast traders (after all, the CBOE itself gave up), and must cater to their retail user base instead.
In a future post, we will explore other microstructures beyond all-to-all exchanges and bilateral OTC trading.
Fiber threads image via Shutterstock
submitted by GTE_IO to u/GTE_IO

The Problem with PoW


Miners have always had it rough..
"Frustrated Miners"


The Problem with PoW
(and what is being done to solve it)

Proof of Work (PoW) is one of the most commonly used consensus mechanisms entrusted with securing and validating many of today’s most successful cryptocurrencies, Bitcoin being one. Battle-hardened and having weathered the test of time, Bitcoin has demonstrated the undeniable strength and reliability of the PoW consensus model through sheer market saturation and, of course, its persistence.
Beyond the cost of powerful computing hardware, miners prove they are benefiting the network by expending energy, in the form of electricity, to solve and hash away at complex math problems on their computers, using whatever suitable tools they have at their disposal. The mathematics securing proof of work revolves around unique algorithms, each with its own benefits and vulnerabilities, and mining can require different software and hardware depending on the coin.
Because each block presents a unique and effectively random hash puzzle, the “work” has to be performed for each block individually, and the difficulty of the problem can be raised as the speed at which blocks are solved increases.
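To make the “work” concrete, here is a minimal sketch of such a hash puzzle, using double SHA-256 as Bitcoin does. The header format and difficulty encoding are heavily simplified compared to any real chain.

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce whose double-SHA256 hash falls below the target."""
    target = 2 ** (256 - difficulty_bits)  # more bits -> smaller target -> harder puzzle
    nonce = 0
    while True:
        payload = block_data + nonce.to_bytes(8, "little")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Each extra difficulty bit doubles the expected work; real networks
# raise difficulty whenever blocks arrive faster than the target rate.
print(mine(b"example block header", 16))
```

There is no shortcut to the answer other than trying nonces, which is why the only lever a miner has is raw hashrate, and why the hardware arms race below follows.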
Hashrates and Hardware Types
While proof of work is an effective means of securing a blockchain, it inherently promotes competition among miners seeking ever-higher hashrates, because of the reward earned by the node that wins the right to add the next block. In turn, these higher hashrates benefit the blockchain, providing better security when they are the result of a well-distributed, decentralized network of miners.
When Bitcoin first launched its genesis block, it was mined exclusively by CPUs. Over the years, various programmers and developers have devised newer, faster, and more energy efficient ways to generate higher hashrates; some by perfecting the software end of things, and others, when the incentives are great enough, create expensive specialized hardware such as ASICs (application-specific integrated circuit). With the express purpose of extracting every last bit of hashing power, efficiency being paramount, ASICs are stripped down, bare minimum, hardware representations of a specific coin’s algorithm.
This gives ASICs a massive advantage in raw hashing power and energy consumption over CPUs and GPUs, but with the significant drawbacks of being very expensive to design and manufacture, translating to a high economic barrier for the casual miner. Because they are hardware representations of a single targeted algorithm, if a project decides to fork and change algorithms suddenly, your powerful brand-new ASIC becomes a very expensive paperweight. The high costs of developing and manufacturing ASICs, and the associated risks, make them unfit for mass adoption at this time.
Somewhere on the high end, in the vast hashrate expanse between GPU and ASIC, sits the FPGA (field-programmable gate array). FPGAs are essentially ASICs that trade some efficiency for flexibility: they are reprogrammable and often used in the “field” to test an algorithm before it is implemented in an ASIC. As a precursor to the ASIC, FPGAs are somewhat similar to GPUs in their flexibility, but they require advanced programming skills and, like ASICs, are expensive and still fairly uncommon.
2 Guys 1 ASIC
One of the issues with proof of work incentivizing the pursuit of higher hashrates lies in how the network calculates block reward coinbase payouts and rewards miners for the work they have submitted. If a coin generates, say, one block a minute, and this is a constant, then what happens if more miners jump on the network and do more work? The network cannot pay out more than one block reward per minute, so a difficulty mechanism is used to maintain balance. The difficulty scales up and down in response to the overall nethash: if many miners join the network, or extremely high-hashrate devices such as ASICs or FPGAs jump on, the network responds by making the problems harder, effectively giving the edge to hardware that can solve them faster and rebalancing the network. This not only maintains the block-a-minute reward, it has the added side effect of energy requirements that scale up with network adoption.
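A difficulty retarget of this kind can be sketched as follows. The clamping factor mirrors Bitcoin’s; the window length is arbitrary, and real chains differ in the details.

```python
def retarget_difficulty(old_difficulty: float,
                        actual_timespan: float,
                        expected_timespan: float,
                        max_adjust: float = 4.0) -> float:
    """Scale difficulty so blocks keep arriving at the expected rate.

    If the last window of blocks arrived faster than expected (more
    hashpower joined), difficulty rises; if slower, it falls. The
    adjustment is clamped, as in Bitcoin, to avoid wild swings.
    """
    ratio = expected_timespan / actual_timespan
    ratio = max(1.0 / max_adjust, min(max_adjust, ratio))
    return old_difficulty * ratio

# Hashpower doubled, so the last window of blocks took half the expected time:
week = 7 * 24 * 3600
print(retarget_difficulty(1000.0, actual_timespan=week / 2,
                          expected_timespan=week))  # → 2000.0
```

The block rate stays constant no matter how much hardware joins, which is exactly why payouts per miner shrink as the network grows.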
Imagine, for example, one miner alone on a network with a CPU doing 50 MH/s, getting all 100 coins that can possibly be paid out in a day. If another miner joins with the same CPU, each would receive 50 coins a day instead of 100, since they split the required work evenly, even though the network’s total electrical output has doubled along with the work. Electricity costs miners money and is a factor in driving up the coin’s price along with adoption, and since more people are now mining, the coin is less centralized. Now let’s say a large corporation finds it profitable to manufacture an ASIC for this coin, knowing it will make its money back by mining or by selling the units to professionals. It joins the network doing 900 MH/s and pulls in 90 coins a day, while the two guys with their CPUs now get 5 each. Those two guys aren’t very happy, but the corporation is. Not only does this hurt the miners, it compromises the security of the entire network by centralizing the coin supply and hashrate, opening the door to double spends and 51% attacks from potential malicious actors. Uncertainty of motives and questionable validity in a distributed ledger do not mix.
When technology advances in a field, it is usually applauded and welcomed with open arms, but in the world of crypto things can work quite differently. One of the glaring flaws in the current model, with the advent of specialized hardware, is that it never ends. Suppose the two men from the rather extreme example above take out loans to buy the ASIC they heard can earn them 90 coins a day. When they join the other ASIC on the network, the difficulty adjusts to keep daily payouts at 100, and they will each receive only 33 coins instead of 90, since the reward is now split three ways. Now what happens when a better ASIC is released by that corporation? Hopefully, those two guys paid off their loans and sold their old ASICs before they became obsolete.
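The arithmetic in the example above is just a proportional split of a fixed daily reward. A minimal sketch, reproducing the numbers from the text:

```python
def daily_payouts(hashrates_mhs: dict, daily_reward: float) -> dict:
    """Split a fixed daily block reward in proportion to each miner's hashrate."""
    nethash = sum(hashrates_mhs.values())
    return {name: round(daily_reward * h / nethash, 1)
            for name, h in hashrates_mhs.items()}

# Two CPUs at 50 MH/s, one ASIC at 900 MH/s, 100 coins paid out per day:
print(daily_payouts({"cpu_1": 50, "cpu_2": 50, "asic": 900}, 100.0))
# → {'cpu_1': 5.0, 'cpu_2': 5.0, 'asic': 90.0}

# After the two men buy ASICs too, everyone splits the same 100 coins:
print(daily_payouts({"asic_1": 900, "asic_2": 900, "asic_3": 900}, 100.0))
# → {'asic_1': 33.3, 'asic_2': 33.3, 'asic_3': 33.3}
```

Note that tripling the network’s hashrate and power draw left total payouts unchanged, which is the treadmill the section describes.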
This system, as it stands now, only perpetuates a never ending hashrate arms race in which the weapons of choice are usually a combination of efficiency, economics, profitability and in some cases control.
Implications of Centralization
This brings us to another big concern with expensive specialized hardware: the risk of centralization. Because they are so expensive and inaccessible to the casual miner, ASICs and FPGAs predominantly remain limited to a select few. Centralization occurs when one small group or a single entity controls the vast majority of the hash power and, as a result, the coin supply, and is able to exert its influence to manipulate the market or, in some cases, the network itself (usually the case with dishonest nodes or bad actors).
This is entirely antithetical to what cryptocurrency was born of, and since its inception many concerted efforts have been made to avoid centralization at all costs. An entity in control of a centralized coin would have the power to manipulate the price, and one controlling a centralized hashrate could affect network usability and reliability, and even perform double spends, leading to the demise of a coin, among other things.
The world of crypto is a strange new place, with rapid advancements across many fields, economies, and borders, leaving plenty of room for improvement; while it may feel like a never-ending game of catch-up, many talented developers and programmers are working around the clock to bring us more sustainable solutions.
The Rise of FPGAs
With the recent option of programming them in the commonly used language C++, and thanks to their overall flexibility, FPGAs are becoming somewhat more common, especially in larger farms and industrial settings; but they still remain largely out of the hands of most mining enthusiasts and almost unheard of to the average hobby miner. Things appear to be changing, though (one example of which I’ll discuss below), and some think we will soon see a day when mining with a CPU or GPU just won’t cut it any longer and the market will be dominated by FPGAs and specialized ASICs, bringing efficiency gains for proof of work while also carelessly leading us all towards the next round of spending.
A perfect real-world example of the effect specialized hardware has had on the crypto community recently surfaced, involving a fairly new project called VerusCoin and a fairly new, relatively more affordable FPGA. The FPGA is designed to target specific altcoins whose algos do not require RAM overhead. It was discovered that the company had released a new algorithm, kept secret from the public, which could effectively mine Verus at 20x the speed of GPUs, the next-fastest hardware type mining on the Verus network.
Unfortunately this was done with a deliberately secret approach, calling the Verus algorithm “Algo1” and encouraging owners of the FPGA to never speak of the algorithm in public channels, admonishing a user when they did let the cat out of the bag. The problem with this business model is that it is parasitic in nature. In an ecosystem where advancements can benefit the entire crypto community, this sort of secret mining approach also does not support the philosophies set forth by the Bitcoin or subsequent open source and decentralization movements.
Although this was not done in the spirit of open source, it does hint to an important step in hardware innovation where we could see more efficient specialized systems within reach of the casual miner. The FPGA requires unique sets of data called a bitstream in order to be able to recognize each individual coin’s algorithm and mine them. Because it’s reprogrammable, with the support of a strong development team creating such bitstreams, the miner doesn’t end up with a brick if an algorithm changes.
All is not lost thanks to.. um.. Technology?
Shortly after discovering FPGAs on the network, the Verus developers quickly designed, tested, and implemented a new, much more complex and improved algorithm via a fork that enabled Verus to transition smoothly from VerusHash 1.0 to VerusHash 2.0 at block 310,000. Since the fork, VerusHash 2.0 has done exactly what it was designed to do: equalize hardware performance relative to the device being used, enabling CPUs (the most widely available “ASICs”) to mine side by side with GPUs at a profit, and it appears this will also apply to other specialized hardware. This is something no other project has been able to do until now. Rather than pursue the folly of so many projects before it, attempting to be “ASIC-proof”, Verus effectively achieved, and presents to the world, an entirely new model of “hardware homogeny”. As the late, great Bruce Lee once said: “Don’t get set into one form, adapt it and build your own, and let it grow, be like water.”
In the design of VerusHash 2.0, Verus has shown it doesn’t resist progress like so many other new algorithms try to do, it embraces change and adapts to it in the way that water becomes whatever vessel it inhabits. This new approach- an industry first- could very well become an industry standard and in doing so, would usher in a new age for proof of work based coins. VerusHash 2.0 has the potential to correct the single largest design flaw in the proof of work consensus mechanism- the ever expanding monetary and energy requirements that have plagued PoW based projects since the inception of the consensus mechanism. Verus also solves another major issue of coin and net hash centralization by enabling legitimate CPU mining, offering greater coin and hashrate distribution.
Digging a bit deeper, it turns out the Verus development team are no rookies. The lead developer, Michael F Toutonghi, has spent decades programming in the field and is a former Vice President and Technical Fellow at Microsoft, recognized founder and architect of Microsoft’s .Net platform, ex-Technical Fellow of Microsoft’s advertising platform, ex-CTO of Parallels Corporation, and an experienced distributed computing and machine learning architect. The project he helped create employs a diverse myriad of technologies and security features to form one of the most advanced and secure cryptocurrencies to date. A brief description of what makes VerusCoin special, quoted from a community member:
"Verus has a unique and new consensus algorithm called Proof of Power which is a 50% PoW/50% PoS algorithm that solves theoretical weaknesses in other PoS systems (Nothing at Stake problem for example) and is provably immune to 51% hash attacks. With this, Verus uses the new hash algorithm, VerusHash 2.0. VerusHash 2.0 is designed to better equalize mining across all hardware platforms, while favoring the latest CPUs over older types, which is also one defense against the centralizing potential of botnets. Unlike past efforts to equalize hardware hash-rates across different hardware types, VerusHash 2.0 explicitly enables CPUs to gain even more power relative to GPUs and FPGAs, enabling the most decentralizing hardware, CPUs (due to their virtually complete market penetration), to stay relevant as miners for the indefinite future. As for anonymity, Verus is not a "forced private", allowing for both transparent and shielded (private) transactions...and private messages as well"
If other projects can learn from this and adopt a similar approach or continue to innovate with new ideas, it could mean an end to all the doom and gloom predictions that CPU and GPU mining are dead, offering a much needed reprieve and an alternative to miners who have been faced with the difficult decision of either pulling the plug and shutting down shop or breaking down their rigs to sell off parts and buy new, more expensive hardware…and in so doing present an overall unprecedented level of decentralization not yet seen in cryptocurrency.
Technological advancements led us to the world of secure digital currencies and the progress being made with hardware efficiencies is indisputably beneficial to us all. ASICs and FPGAs aren’t inherently bad, and there are ways in which they could be made more affordable and available for mass distribution. More than anything, it is important that we work together as communities to find solutions that can benefit us all for the long term.
In an ever changing world where it may be easy to lose sight of the real accomplishments that brought us to this point one thing is certain, cryptocurrency is here to stay and the projects that are doing something to solve the current problems in the proof of work consensus mechanism will be the ones that lead us toward our collective vision of a better world- not just for the world of crypto but for each and every one of us.
submitted by Godballz to EtherMining [link] [comments]

Uh, is this legitimate? Butterfly Labs announces ASIC lineup, game changing speeds.

Uh, is this legitimate? Butterfly Labs announces ASIC lineup, game changing speeds. submitted by Thorbinator to Bitcoin [link] [comments]

I own 179 BTC, here is my story

I am not a wealthy person by any means, but Bitcoin has helped.
I discovered Bitcoin via a post on overclock.net on April 27th, 2011. I believe the price was about $1.50/coin then. I read the posts about people mining them, did some research, and immediately started my Radeon card mining them. I had a 4770 back then.
There was an exchange to sell Bitcoins for linden dollars (Second Life currency) and then I could sell those for paypal dollars. Within a day I had proven to my wife that I could make money with this Bitcoin thing. Despite us being in a position where we couldn't even pay our credit cards, I took the $1100 we had and bought 4 5850's, some power supplies, and some cheap craigslist computers. I figured that if this whole Bitcoin thing failed miserably, at least I had some decent computer hardware I could resell and recover most of the cost. I immediately sold one 5850 for greater-than-market value since they were in demand and I needed the money, and started the other 3 mining. At one point, I was mining nearly 8 coins a day. I bought a few more cards as time went on and continued GPU mining for as long as it was viable.
This whole thing saved us financially. I was able to sell the Bitcoins and settle on my unpayable credit card debts. I held on to a few during the crash but managed to sell most of them at $10 or more, fortunately. After that I started saving them, since they were worth so little. I bought some of the early BFL FPGA miners, the ones that were measured in MHashes not GHashes. After mining with those for a while and then selling them to someone who wanted them more than I did, I had more than 450 BTC. I took the plunge and pre-ordered BFL's latest offerings, the 60GH singles, the day they were available, becoming one of the first on the preorder list. Little did I know I would have been much better off just holding those coins...
Regardless, I did eventually receive those singles, and managed to get about 225 BTC out of them before they were no longer worth running. I've been slowly selling the stash as we needed for remodel projects around the house and for miscellaneous expenses, though I finally no longer need to do so, as we've been able to pay off more debts and have more income than expenses each month. Now I've got a nice pile of savings, and I'm hoping to someday be able to use it to buy a better house in a better neighborhood.
I generally don't tell people that I have just about all my liquid assets in Bitcoin, as they would call me crazy. They might be right. But it's a risk I'm willing to take. I do have some equity in my house, and some retirement accounts, but neither is worth more than my BTC stash.
So that's MY story, what's yours?
submitted by bitcoinzzzz to Bitcoin [link] [comments]

The Problem with PoW

The Problem with PoW

Miners have always had it rough..
"Frustrated Miners"


The Problem with PoW
(and what is being done to solve it)

Proof of Work (PoW) is one of the most commonly used consensus mechanisms entrusted to secure and validate many of today’s most successful cryptocurrencies, Bitcoin being one. Battle-hardened and having weathered the test of time, Bitcoin has demonstrated the undeniable strength and reliability of the PoW consensus model through sheer market saturation, and of course, its persistence.
In addition to the cost of powerful computing hardware, miners prove that they are benefiting the network by expending energy in the form of electricity, by solving and hashing away complex math problems on their computers, utilizing any suitable tools that they have at their disposal. The mathematics involved in securing proof of work revolve around unique algorithms, each with their own benefits and vulnerabilities, and can require different software/hardware to mine depending on the coin.
Because each block has a unique and entirely random hash, or “puzzle” to solve, the “work” has to be performed for each block individually and the difficulty of the problem can be increased as the speed at which blocks are solved increases.
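The per-block "puzzle" described above can be sketched in a few lines of Python. This is a toy stand-in for real mining, assuming a simplified target of leading zero hex digits (real networks compare the hash against a numeric difficulty target):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> tuple[int, str]:
    """Brute-force a nonce whose SHA-256 digest starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest  # the "work": every candidate nonce had to be hashed
        nonce += 1

# Raising `difficulty` by one hex digit makes the search ~16x longer on average,
# which is how a network slows block production as more hashpower joins.
nonce, digest = mine("block #1: alice pays bob 5", 4)
```

Anyone can verify the work with a single hash, which is what makes the mechanism cheap to check but expensive to produce.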
Hashrates and Hardware Types
While proof of work is an effective means of securing a blockchain, it inherently promotes competition amongst miners seeking higher and higher hashrates due to the rewards earned by the node who wins the right to add the next block. In turn, these higher hash rates benefit the blockchain, providing better security when it’s a result of a well distributed/decentralized network of miners.
When Bitcoin first launched its genesis block, it was mined exclusively by CPUs. Over the years, various programmers and developers have devised newer, faster, and more energy-efficient ways to generate higher hashrates; some by perfecting the software end of things, and others, when the incentives are great enough, by creating expensive specialized hardware such as ASICs (application-specific integrated circuits). With the express purpose of extracting every last bit of hashing power, efficiency being paramount, ASICs are stripped-down, bare-minimum hardware representations of a specific coin’s algorithm.
This gives ASICs a massive advantage in raw hashing power and energy consumption over CPUs and GPUs, but with the significant drawback of being very expensive to design and manufacture, translating to a high economic barrier for the casual miner. And because an ASIC is a hardware representation of a single targeted algorithm, if a project decides to fork and change algorithms suddenly, your powerful brand-new ASIC becomes a very expensive paperweight. The high costs of developing and manufacturing ASICs, and the associated risks, make them unfit for mass adoption at this time.
Somewhere on the high end, in the vast hashrate expanse created between GPU and ASIC, sits the FPGA (field programmable gate array). FPGAs are basically ASICs that make some compromises with efficiency in order to have more flexibility, namely they are reprogrammable and often used in the “field” to test an algorithm before implementing it in an ASIC. As a precursor to the ASIC, FPGAs are somewhat similar to GPUs in their flexibility, but require advanced programming skills and, like ASICs, are expensive and still fairly uncommon.
2 Guys 1 ASIC
One of the issues with proof of work incentivizing the pursuit of higher hashrates is in how the network calculates block reward coinbase payouts and rewards miners based on the work that they have submitted. If a coin generated, say a block a minute, and this is a constant, then what happens if more miners jump on a network and do more work? The network cannot pay out more than 1 block reward per 1 minute, and so a difficulty mechanism is used to maintain balance. The difficulty will scale up and down in response to the overall nethash, so if many miners join the network, or extremely high hashing devices such as ASICs or FPGAs jump on, the network will respond accordingly, using the difficulty mechanism to make the problems harder, effectively giving an edge to hardware that can solve them faster, balancing the network. This not only maintains the block a minute reward but it has the added side-effect of energy requirements that scale up with network adoption.
Imagine, for example, if one miner gets on a network all alone with a CPU doing 50 MH/s and is getting all 100 coins that can possibly be paid out in a day. Then, if another miner jumps on the network with the same CPU, each miner would receive 50 coins in a day instead of 100 since they are splitting the required work evenly, despite the fact that the net electrical output has doubled along with the work. Electricity costs miners money and is a factor in driving up coin price along with adoption, and since more people are now mining, the coin is less centralized. Now let’s say a large corporation has found it profitable to manufacture an ASIC for this coin, knowing they will make their money back mining it or selling the units to professionals. They join the network doing 900 MH/s and will be pulling in 90 coins a day, while the two guys with their CPUs each get 5 now. Those two guys aren’t very happy, but the corporation is. Not only does this negatively affect the miners, it compromises the security of the entire network by centralizing the coin supply and hashrate, opening the doors to double spends and 51% attacks from potential malicious actors. Uncertainty of motives and questionable validity in a distributed ledger do not mix.
When technology advances in a field, it is usually applauded and welcomed with open arms, but in the world of crypto things can work quite differently. One of the glaring flaws in the current model and the advent of specialized hardware is that it’s never ending. Suppose the two men from the rather extreme example above took out a loan to get themselves that ASIC they heard about that can get them 90 coins a day? When they join the other ASIC on the network, the difficulty adjusts to keep daily payouts consistent at 100, and they will each receive only 33 coins instead of 90 since the reward is now being split three ways. Now what happens if a better ASIC is released by that corporation? Hopefully, those two guys were able to pay off their loans and sell their old ASICs before they became obsolete.
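The arithmetic in the scenario above, a fixed daily issuance split in proportion to hashrate, can be checked with a short Python sketch (the numbers are the illustrative ones from the example, not real network figures):

```python
def daily_rewards(hashrates_mh: dict[str, float], daily_coins: float = 100.0) -> dict[str, float]:
    """Split a fixed daily coin issuance among miners in proportion to hashrate."""
    total = sum(hashrates_mh.values())
    return {miner: daily_coins * rate / total for miner, rate in hashrates_mh.items()}

# Two CPUs at 50 MH/s each split the full issuance evenly:
print(daily_rewards({"cpu1": 50, "cpu2": 50}))               # 50 coins each
# A 900 MH/s ASIC joins, and the CPUs drop from 50 coins to 5:
print(daily_rewards({"cpu1": 50, "cpu2": 50, "asic": 900}))  # 5, 5, and 90
```

The difficulty mechanism enforces exactly this proportionality, which is why every new, faster device dilutes everyone else's payout.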
This system, as it stands now, only perpetuates a never ending hashrate arms race in which the weapons of choice are usually a combination of efficiency, economics, profitability and in some cases control.
Implications of Centralization
This brings us to another big concern with expensive specialized hardware: the risk of centralization. Because they are so expensive and inaccessible to the casual miner, ASICs and FPGAs predominantly remain limited to a select few. Centralization occurs when one small group or a single entity controls the vast majority of hash power and, as a result, coin supply, and is able to exert its influence to manipulate the market or, in some cases, the network itself (usually the case of dishonest nodes or bad actors).
This is entirely antithetical to what cryptocurrency was born of, and since its inception many concerted efforts have been made to avoid centralization at all costs. An entity in control of a centralized coin would have the power to manipulate the price, and having a centralized hashrate would enable them to affect network usability, reliability, and even perform double spends leading to the demise of a coin, among other things.
The world of crypto is a strange new place, with rapidly growing advancements across many fields, economies, and borders, leaving plenty of room for improvement; while it may feel like a never-ending game of catch up, there are many talented developers and programmers working around the clock to bring us all more sustainable solutions.
The Rise of FPGAs
With the recent implementation of the commonly used coding language C++, and due to their overall flexibility, FPGAs are becoming somewhat more common, especially in larger farms and in industrial settings; but they still remain primarily out of the hands of most mining enthusiasts and almost unheard of to the average hobby miner. Things appear to be changing though, one example of which I’ll discuss below, and it is thought by some that soon we will see a day when mining with a CPU or GPU just won’t cut it any longer, and the market will be dominated by FPGAs and specialized ASICs, bringing with them efficiency gains for proof of work, while also carelessly leading us all towards the next round of spending.
A perfect real-world example of the effect specialized hardware has had on the crypto community was recently discovered involving a fairly new project called VerusCoin and a fairly new, relatively more economically accessible FPGA. The FPGA is designed to target specific alt-coins whose algos do not require RAM overhead. It was discovered the company had released a new algorithm, kept secret from the public, which could effectively mine Verus at 20x the speed of GPUs, which were the next fastest hardware type mining on the Verus network.
Unfortunately this was done with a deliberately secret approach, calling the Verus algorithm “Algo1” and encouraging owners of the FPGA to never speak of the algorithm in public channels, admonishing a user when they did let the cat out of the bag. The problem with this business model is that it is parasitic in nature. In an ecosystem where advancements can benefit the entire crypto community, this sort of secret mining approach also does not support the philosophies set forth by the Bitcoin or subsequent open source and decentralization movements.
Although this was not done in the spirit of open source, it does hint to an important step in hardware innovation where we could see more efficient specialized systems within reach of the casual miner. The FPGA requires unique sets of data called a bitstream in order to be able to recognize each individual coin’s algorithm and mine them. Because it’s reprogrammable, with the support of a strong development team creating such bitstreams, the miner doesn’t end up with a brick if an algorithm changes.
All is not lost thanks to.. um.. Technology?
Shortly after discovering FPGAs on the network, the Verus developers quickly designed, tested, and implemented a new, much more complex and improved algorithm via a fork that enabled Verus to transition smoothly from VerusHash 1.0 to VerusHash 2.0 at block 310,000. Since the fork, VerusHash 2.0 has demonstrated doing exactly what it was designed for- equalizing hardware performance relative to the device being used while enabling CPUs (the most widely available “ASICs”) to mine side by side with GPUs, at a profit, and it appears this will also apply to other specialized hardware. This is something no other project has been able to do until now. Rather than pursue the folly of so many other projects before it- attempting to be “ASIC proof”- Verus effectively achieved and presents to the world an entirely new model of “hardware homogeny”. As the late, great Bruce Lee once said- “Don’t get set into one form, adapt it and build your own, and let it grow, be like water.”
In the design of VerusHash 2.0, Verus has shown it doesn’t resist progress like so many other new algorithms try to do, it embraces change and adapts to it in the way that water becomes whatever vessel it inhabits. This new approach- an industry first- could very well become an industry standard and in doing so, would usher in a new age for proof of work based coins. VerusHash 2.0 has the potential to correct the single largest design flaw in the proof of work consensus mechanism- the ever expanding monetary and energy requirements that have plagued PoW based projects since the inception of the consensus mechanism. Verus also solves another major issue of coin and net hash centralization by enabling legitimate CPU mining, offering greater coin and hashrate distribution.
Digging a bit deeper, it turns out the Verus development team are no rookies. The lead developer, Michael F Toutonghi, has spent decades in the field programming and is a former Vice President and Technical Fellow at Microsoft, recognized founder and architect of Microsoft's .Net platform, ex-Technical Fellow of Microsoft's advertising platform, ex-CTO of Parallels Corporation, and an experienced distributed computing and machine learning architect. The project he helped create makes use of a diverse array of technologies and security features to form one of the most advanced and secure cryptocurrencies to date. A brief description of what makes VerusCoin special, quoted from a community member-
"Verus has a unique and new consensus algorithm called Proof of Power which is a 50% PoW/50% PoS algorithm that solves theoretical weaknesses in other PoS systems (Nothing at Stake problem for example) and is provably immune to 51% hash attacks. With this, Verus uses the new hash algorithm, VerusHash 2.0. VerusHash 2.0 is designed to better equalize mining across all hardware platforms, while favoring the latest CPUs over older types, which is also one defense against the centralizing potential of botnets. Unlike past efforts to equalize hardware hash-rates across different hardware types, VerusHash 2.0 explicitly enables CPUs to gain even more power relative to GPUs and FPGAs, enabling the most decentralizing hardware, CPUs (due to their virtually complete market penetration), to stay relevant as miners for the indefinite future. As for anonymity, Verus is not a "forced private", allowing for both transparent and shielded (private) transactions...and private messages as well"
If other projects can learn from this and adopt a similar approach or continue to innovate with new ideas, it could mean an end to all the doom and gloom predictions that CPU and GPU mining are dead, offering a much needed reprieve and an alternative to miners who have been faced with the difficult decision of either pulling the plug and shutting down shop or breaking down their rigs to sell off parts and buy new, more expensive hardware…and in so doing present an overall unprecedented level of decentralization not yet seen in cryptocurrency.
Technological advancements led us to the world of secure digital currencies and the progress being made with hardware efficiencies is indisputably beneficial to us all. ASICs and FPGAs aren’t inherently bad, and there are ways in which they could be made more affordable and available for mass distribution. More than anything, it is important that we work together as communities to find solutions that can benefit us all for the long term.
In an ever changing world where it may be easy to lose sight of the real accomplishments that brought us to this point one thing is certain, cryptocurrency is here to stay and the projects that are doing something to solve the current problems in the proof of work consensus mechanism will be the ones that lead us toward our collective vision of a better world- not just for the world of crypto but for each and every one of us.
submitted by Godballz to gpumining [link] [comments]

Check out Part 1 of our first Skycoin Official AMA with Synth!

Part 2 of the AMA posted here.
 
What is Skywire? Where does it fit in with Skycoin?
Skycoin is a blockchain application platform. We have multiple coins in the platform (Metallicoin, mdl.life, solarbankers.com, etc). We let people launch their own blockchain applications (including coins).
There are two parts to Skywire. The first part is the Skywire node. The second part is the hardware.
Skywire is one of the first applications we are launching on the Skycoin platform. It is one of our flagship applications that has been in development for several years. Skywire is basically a decentralized ISP on blockchain. It is like Tor, but you are paid to run it. You forward packets for your neighbors and you receive coins. You pay coins to other people for forwarding your packets.
So it is like Tor but on blockchain and you are paid for running the network. Also, while Tor is slow, Skywire was designed to be faster than the current internet, instead of slower.
Skywire is a test application for monetizing excess bandwidth. Eventually the software defined networking technology behind Skywire, will allow us to build physical networks (actual mesh nets) that can begin to replace centralized ISPs. However, the current Skywire prototype is still running over the existing internet, but later we will start building out our own hardware.
Skywire is a solution for protecting people’s privacy and is also a solution to net neutrality. If Skycoin can decentralize the ISPs with blockchain, then we won’t have to beg the FCC to protect our rights.
Skywire is just a prototype of a larger system. Eventually we will allow people to sell bandwidth, computational resources and storage.
On the hardware side, the Skywire Miner is a like a personal cloud, for blockchain applications. It has eight computers in it and you plug it in and you can run your blockchain applications on it. You can even earn coins by renting out capacities to other users on the network.
 
How would your everyday, average Joe user access the Skywire network? Let's say from their phone…
We designed Skywire and Skycoin to be as usable as possible. We think you should not have to be a software developer to use blockchain applications.
Skywire is designed to be “zeroconf”, with zero configuration. You just plug in your node and it works. Its plug and play.
Eventually you will be able to buy a Skywire Miner and delegate control of the hardware to a “pool”, who will configure it for you and do all the work, optimize the settings and the pool will just take a small fee for the service and owner of the hardware will receive the rest of the coins their miners are earning.
You will just plug in the Skyminer and start earning coins. It will be plug and play.
Most users will not know their traffic is being carried over Skywire. Just like they do not know if they are using TCP or UDP. They will just connect their computer to the network with wifi or an ethernet cable and it will work exactly like the internet does now.
 
Are you completely anonymous on Skywire, or do you need to add a VPN and go through Tor for extra protection?
Skywire is designed, to protect users privacy much better than the existing internet. Each node only knows the previous hop and the next hop for any packet. The contents of the packet are encrypted (like HTTPS), so no one can spy on the data.
Since Skywire is designed to be faster than the existing internet, you give up a little privacy for the speed. Tor makes packets harder to trace by reshuffling them and slowing them down, while Skywire is designed for pure speed and performance.
 
Will Skywire users be able to access traditional internet resources like Google and Facebook over Skywire?
Yes. Most users will not even know they are using Skywire at all. It will be completely invisible to them.
Skywire has two modes of operation. One mode looks like the normal internet to the user and the other mode is for special applications designed to run completely inside of the Skywire network. Skywire native apps will have increased privacy, speed and performance, but all existing internet apps will still work on the new network.
 
How difficult will it be for a traditional e-service to port their products and services to Skywire / Skycoin? Are there plans in place to facilitate those transitions as companies find the exceeding value in joining the free distributed internet?
We are going to make it very easy. Existing companies run their whole internal networks on MPLS and Skywire is almost identical to MPLS, so they won’t have to make any changes in most cases.
 
What is the routing protocol? How are the routes found?
Skywire is source routed. This means that you choose the route your data takes. You can chose routes that offer higher privacy, more bandwidth (for video downloads) or lower latency (for gaming).
Skywire puts control of the data back to the user.
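Source routing as described, with the sender choosing among candidate routes by policy, might look something like the following sketch. The route metrics and policy names here are illustrative assumptions, not Skywire's actual API:

```python
def pick_route(routes: list[dict], policy: str) -> dict:
    """Select a full end-to-end route according to a user-chosen policy."""
    if policy == "low_latency":       # e.g. gaming
        return min(routes, key=lambda r: r["latency_ms"])
    if policy == "high_bandwidth":    # e.g. video downloads
        return max(routes, key=lambda r: r["bandwidth_mbps"])
    if policy == "high_privacy":      # more hops, harder to trace
        return max(routes, key=lambda r: len(r["hops"]))
    raise ValueError(f"unknown policy: {policy}")

# Hypothetical candidate routes from A to C:
routes = [
    {"hops": ["A", "C"],           "latency_ms": 12, "bandwidth_mbps": 40},
    {"hops": ["A", "B", "C"],      "latency_ms": 30, "bandwidth_mbps": 200},
    {"hops": ["A", "B", "D", "C"], "latency_ms": 55, "bandwidth_mbps": 80},
]
best = pick_route(routes, "low_latency")  # the direct two-node route
```

The point is that the policy lives with the sender, not with intermediate routers, so different applications on the same machine can use different routes simultaneously.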
 
I also understand that the protocols underlying Skywire will be/already are quite different from the Internet protocols. Taking into account the years of research applied to the current Internet and the several strategies for routing, it doesn't seem an easy task to rebuild everything and make it work. Where can information about the routing strategies used in Skywire be found?
The routing strategies are user defined. There is no best routing strategy that is optimal for every user or application. Instead we allow people to choose their routes and policies, based upon the application, time of day, available bandwidth, reliability and other factors.
This is actually the way the original internet worked. However, it was scrapped because of the RAM limitations of early computers which only had 4 KB of memory. So the internet was built upon stateless routing protocols because of the limitations of the available computers at the time, not because the networking protocols were the best or highest performance. Today even a cell phone has 4 GB of ram and 1 million times the memory of a computer in the 1980s, so there is no reason to accept these limitations anymore.
Our implementation is simpler and faster because we are stripping away the layers of junk that have accumulated. The internet was actually built up piecemeal, without any coherence, coordination or planning. The internet today is a mishmash of different ad-hoc protocols that have been duct taped together over decades, without any real design.
Skywire is an re-envisioning of the internet, if it was built today knowing what we know now. This means simplifying the protocols and improving the performance.
 
How will the routing work if someone from Europe wants to access a video from a node in Australia (for example)? How do the nodes know the next hop if they cant read the origin or destiny of any packet?
If you have a route with N hops, then you contact each of the nodes on the route (through a messaging service) and set the route table on each route. Then when you drop a packet in the route, it gets forwarded automatically. You could have 60 or 120 hops between Australia and Europe and its fine.
Each individual node only knows the previous hop and the next hop in the chain. That is all the node needs to know.
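The route setup just described, where the sender programs every node on the path but each node stores only a next-hop entry per route ID, can be sketched like this (class and function names are hypothetical; the real Skywire protocol details differ):

```python
class Node:
    """A relay that knows only the next hop for each route ID, nothing else."""
    def __init__(self, name: str):
        self.name = name
        self.next_hop = {}    # route_id -> Node, or None if this node is the destination
        self.delivered = []
    def set_route(self, route_id: int, nxt: "Node | None") -> None:
        self.next_hop[route_id] = nxt
    def forward(self, route_id: int, packet: str) -> None:
        nxt = self.next_hop[route_id]
        if nxt is None:
            self.delivered.append(packet)   # end of the route
        else:
            nxt.forward(route_id, packet)   # pass along; payload stays encrypted

def establish_route(route_id: int, nodes: list[Node]) -> None:
    """The sender contacts every node on the route and sets only its next hop."""
    for cur, nxt in zip(nodes, nodes[1:]):
        cur.set_route(route_id, nxt)
    nodes[-1].set_route(route_id, None)

a, b, c = Node("A"), Node("B"), Node("C")
establish_route(7, [a, b, c])
a.forward(7, "encrypted-payload")  # arrives at C; B never learns source or destination
```

Once the tables are set, forwarding is a single dictionary lookup per hop, which is why the hop count (even 60 or 120 hops) is cheap.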
 
Could you estimate a timeline for when Skywire will operate independently from the current ISP infrastructure?
I think Skycoin is a very ambitious project and some parts could take ten or twenty years. Even if we started with a network of a few thousand nodes and we were growing the network over 1% per day, it will still take a decade or two to conquer the Earth.
We are going to start with small scale prototypes (neighborhoods), then try cities. I think the first demonstration networks will be working this year.
 
How will bandwidth be priced in terms of coin hours and who determines this rate?
You could have 40 PHDs each do a thesis on this. The short answer is that an auction model has to be used (similar to Google’s Ad Words auction model) and the auction has to be designed in a way so that the bandwidth prices reach a stable equilibrium.
There are parts of Skycoin that are completely open source and public, like the blockchain and consensus algorithm and Skywire. There are secrets like the auction model and pricing, that are designed to protect Skycoin from being forked and to prevent competitors from copying our work.
We estimate that if a competitor was to start today, with 2 million dollars a year in R&D, that it would take them a minimum of eight years to develop a working bandwidth pricing model. And from experience in auction models for advertising networks, 80% of the competitors will fail to develop a working model at all.
A working, fair, decentralized bandwidth pricing model that was competitive with what we have would take even longer. There are very few people (less than 4) on Earth who have the experience in mathematics, economics, game theory and cryptographic protocols to design the required auction and pricing models.
One of Google’s secrets that allows them to dominate the internet advertising industry, is their auction model for ad pricing. That is what allows Google to pay the content producers the most money for their advertising inventory, while charging the advertising buyers the least.
Google’s auction models for pricing AdSense inventory are even more secretive and important than Google’s search algorithm. This is one of the most important and secretive parts of Google’s business. Even companies like Facebook, with billion dollar war chests have been unable to replicate to close the algorithm gap in this area. Expertise in these algorithms and their auction and pricing models is one of the reasons that Google has been able to extract advertising premiums over Facebook.
Even if a competitor raises a billion dollars and hires all the PHDs in the field and they had ten years to do research, I doubt they would be able to develop anything close to what we have now.
The history of bandwidth markets is very interesting and Enron tried to do a trading desk for bandwidth and bandwidth futures and it completely failed. The mathematical stability and predictability of the pricing of bandwidth under adversarial conditions is one of the major problems.
For instance, one of our “competitors” suggests that people will be paid coins if someone accesses their content. So why don’t you just put up a website and then have 2000 bots go to it, to get free coins? How are they going to stop that?
Or if they are pricing bandwidth, if the price is fixed and the price is too low, then people will not build capacity and bandwidth will be insufficient and the network will be slow.
Or if the price is variable and adjusts with demands, what will stop someone from buying up the capacity for a link (“Cornering the Market”) to drive the price up 50x on links they control and extort money out of the other people on the network with a fake bandwidth shortage?
The pricing algorithm has to be stable under adversarial conditions. It is a very difficult problem, harder than even consensus algorithm research. Even if a competitor had unlimited funding and unlimited time, it is unlikely that they would find a superior solution to what we have and that alone nearly guarantees that we are going to win this market. It gets even more difficult if you need price stability and you admit any type of bandwidth futures, that allow speculation on future prices. This is a kind of problem like Bitcoin consensus algorithm that can only be solved by an act of genius.
We have a lot of experience in this area. It is hyper specialized and a very difficult area and is one of the areas that will give Skycoin a strong sustainable advantage.
 
Will there be a DNS for Skywire to register .sky domains?
Of course. We will definitely add some kind of DNS and name system eventually.
Remembering and typing public keys is too difficult. We want to make it as easy as possible. We want people to be able to register aliases (like screen names) so that people can send coins to aliases instead of having to type in addresses every time.
This will let people send 5 Skycoin to “@bobcat” instead of sending coins to “23TeSPPJVZ9HvXh6iYiKAaLNQroKg8yCdja”. This will be a revolution in usability.
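A minimal sketch of such an alias layer, assuming a simple first-come-first-served registry (the actual design was not specified in the AMA):

```python
class AliasRegistry:
    """Maps human-readable aliases to coin addresses, first come first served."""
    def __init__(self):
        self._names: dict[str, str] = {}
    def register(self, alias: str, address: str) -> None:
        if alias in self._names:
            raise ValueError(f"{alias} is already taken")
        self._names[alias] = address
    def resolve(self, alias: str) -> str:
        return self._names[alias]

registry = AliasRegistry()
registry.register("@bobcat", "23TeSPPJVZ9HvXh6iYiKAaLNQroKg8yCdja")
# Sending 5 Skycoin to "@bobcat" would first resolve the alias to the raw address:
dest = registry.resolve("@bobcat")
```

On a blockchain the registry itself would have to live on-chain (or in consensus) so every node agrees on who owns which alias, which is the hard part this sketch glosses over.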
 
When operating a Skyminer, will people in my surrounding area see it as a Wifi option on their devices?
You can configure it to expose a wifi access point. It depends on what you are trying to do.
 
While I plan on running a DIY miner regardless of the payout, will one of the first 6000 DIY miners built to the same spec as the official miner receive a worthwhile payout in Sky coin? What is the requirement for a DIY miner to get whitelisted (and earning Skycoin) on the Skywire testnet?
The reason we have white-listing on the testnet, is to stop too many nodes from joining the network at once. The network can only support so many nodes until we upgrade certain infrastructure (like the messaging/inter-process communication standard).
Eventually, all DIY miners will be whitelisted, but there will probably be a queue.
 
The Sky team is developing antennas by their own instead of buying or using technology already developed, why is such an effort necessary?
You can of course, buy any commercial antenna or wifi system and use it for Skywire.
We are developing our own custom antennas, to push performance limitations and experiment with advanced technology, like FPGAs (field-programmable gate arrays) and SDR (software-defined radio).
Existing wifi has a huge latency (15 milliseconds per hop). We need to make several modification to get that down to 0.5 millisecond per hop.
We have several custom PCB boards in development. We have a few secret hardware projects that will be announced when they are ready.
For instance, the Skywire Miner was in development for two years before we publicly announced it. Some of our next hardware projects are focused on payments at the point of sale and improving usability, not just the meshnet.
 
So back in January, Steve was asked a question in the Skywire group: "Steve, I am not a tech savage, so how can I understand better the safety running a miner if people on the network do DeepWeb stuff? So i will receive and redirect data packets with crazy things and also there is around 128 GB of storage on my miner. How can i have peace of mind of that?" He replied with "If you don’t run an exit node to the open internet it won’t matter you can run relay nodes if you’re worried about it, or proxy specific content." This seems to go counter to what you mentioned regarding end-to-end encryption with Skywire. Will some people be only relay nodes while others are exit nodes as well?
I think the question is wrong.
You only store content for public keys that you explicitly subscribe to.
This means that if you do not like particular content or do not want it on your hardware, you can just blacklist those public keys or not subscribe to them. Data never goes onto your machine unless you request it.
If you are holding data for a third party, such as forwarding packets, it is always going to be encrypted, so it will look like random noise. There will never be anything in the data that creates legal liability; it will look the same as the output of a random number generator.
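The claim that relayed data looks like the output of a random number generator can be illustrated with a one-time-pad XOR, the textbook case of ciphertext that is statistically indistinguishable from random bytes. This is a toy illustration, not Skywire's actual encryption scheme (which the source does not specify); the packet contents are invented.

```python
import secrets

def xor_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # One-time-pad XOR: with a truly random key of the same length as the
    # message, the ciphertext carries no information about the plaintext.
    assert len(key) == len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

packet = b"forwarded payload: hello"          # hypothetical relayed packet
key = secrets.token_bytes(len(packet))        # fresh random key per packet

ciphertext = xor_encrypt(packet, key)

# Decryption is the same XOR applied again; a relay without the key
# sees only the ciphertext, which is uniformly random noise to it.
recovered = xor_encrypt(ciphertext, key)
```

Real transport encryption uses authenticated ciphers rather than one-time pads, but the relay's view is the same: bytes indistinguishable from noise.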
 
If using the Skyminer, how much bandwidth will be necessary to run it at its best? And what about the router? Is it true it has only 100 Mbit/s output? Is a 1 Gbit/s connection necessary to reach top rates?
Hold on!!!! Let us get the software and testnet running first, lol. We will know once we see what works on the testnet.
 
What will the price be for future Skynodes (formerly called Skyminers)?
We are working on ways of reducing the cost, such as by buying our own factory, doing custom PCB boards and using different materials.
The cheapest Skywire Miner node will be about $30 for a single node miner. We will have a very cheap personal Skywire “hardware VPN” node also.
The miners we are shipping now are for powering the network backbone; they have 8 computers and cost about $800 each. We sold people the miners for 1 BTC each so they could support development, but gave them a Skycoin bonus worth about 1 BTC.
That money then went to fund the development of the newer hardware.
submitted by MuSKYteer to skycoin [link] [comments]

The Problem with PoW

"Frustrated Miners"

The Problem with PoW
(and what is being done to solve it)

Proof of Work (PoW) is one of the most commonly used consensus mechanisms, entrusted to secure and validate many of today’s most successful cryptocurrencies, Bitcoin being one. Battle-hardened and having weathered the test of time, Bitcoin has demonstrated the undeniable strength and reliability of the PoW consensus model through sheer market saturation and, of course, its persistence.
In addition to buying powerful computing hardware, miners prove that they are benefiting the network by expending energy in the form of electricity, solving and hashing away at complex math problems with whatever suitable tools they have at their disposal. The mathematics involved in securing proof of work revolve around unique algorithms, each with its own benefits and vulnerabilities, and mining can require different software and hardware depending on the coin.
Because each block presents a unique puzzle (finding a nonce that produces a valid hash), the “work” has to be performed for each block individually, and the difficulty can be raised as blocks begin to be solved more quickly.
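The nonce search described above can be sketched in a few lines. This is a toy version of Bitcoin-style proof of work, not Bitcoin's actual block-header format: the block data is a placeholder and the difficulty is deliberately tiny so the loop finishes quickly.

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Search for a nonce whose double-SHA256 hash falls below the target.

    Each extra difficulty bit halves the target, roughly doubling the
    expected number of hash attempts -- this is the knob the network turns
    when blocks start arriving too fast.
    """
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        payload = block_data + nonce.to_bytes(8, "big")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# ~2**12 = 4096 attempts expected at 12 difficulty bits; real networks
# use targets astronomically harder than this.
nonce = mine(b"block header", difficulty_bits=12)
```

Verifying the work is a single hash, which is the asymmetry PoW relies on: expensive to produce, cheap to check.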

Hashrates and Hardware Types

While proof of work is an effective means of securing a blockchain, it inherently promotes competition among miners seeking ever-higher hashrates, due to the rewards earned by the node that wins the right to add the next block. In turn, these higher hashrates benefit the blockchain, providing better security when they result from a well-distributed, decentralized network of miners.
When Bitcoin first launched its genesis block, it was mined exclusively by CPUs. Over the years, programmers and developers have devised newer, faster, and more energy-efficient ways to generate higher hashrates: some by perfecting the software end of things, and others, when the incentives are great enough, by creating expensive specialized hardware such as ASICs (application-specific integrated circuits). Built with the express purpose of extracting every last bit of hashing power, efficiency being paramount, ASICs are stripped-down, bare-minimum hardware representations of a specific coin’s algorithm.
This gives ASICs a massive advantage over CPUs and GPUs in terms of raw hashing power and energy consumption, but with the significant drawback of being very expensive to design and manufacture, translating to a high economic barrier for the casual miner. Because they are hardware representations of a single targeted algorithm, if a project decides to fork and change algorithms suddenly, your powerful brand-new ASIC becomes a very expensive paperweight. The high costs of developing and manufacturing ASICs, and the associated risks, make them unfit for mass adoption at this time.
Somewhere on the high end, in the vast hashrate expanse between GPU and ASIC, sits the FPGA (field-programmable gate array). FPGAs are essentially ASICs that trade some efficiency for flexibility: they are reprogrammable, and are often used in the “field” to test an algorithm before it is committed to an ASIC. As a precursor to the ASIC, FPGAs are somewhat similar to GPUs in their flexibility, but they require advanced programming skills and, like ASICs, are expensive and still fairly uncommon.

2 Guys 1 ASIC

One of the issues with proof of work incentivizing the pursuit of higher hashrates lies in how the network calculates block rewards and pays miners for the work they submit. If a coin generates, say, a block a minute, and this is a constant, what happens if more miners jump on the network and do more work? The network cannot pay out more than one block reward per minute, so a difficulty mechanism is used to maintain balance. The difficulty scales up and down in response to the overall nethash: if many miners join the network, or extremely high-hashrate devices such as ASICs or FPGAs jump on, the network responds by making the problems harder, effectively giving the edge to hardware that can solve them faster and balancing the network. This not only maintains the block-a-minute reward, it has the added side effect of energy requirements that scale up with network adoption.
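The retargeting mechanism described above can be sketched as a simple ratio adjustment. This is a minimal sketch in the style of Bitcoin's retarget rule, assuming a clamp of 4x per adjustment as Bitcoin uses; the numbers are illustrative, not any specific coin's parameters.

```python
def retarget(old_difficulty: float, actual_span: float, expected_span: float) -> float:
    """Scale difficulty by how fast blocks actually arrived versus the schedule.

    If blocks came in faster than expected, difficulty rises; if slower, it
    falls. The adjustment is clamped to a 4x move in either direction, as in
    Bitcoin's retarget rule, to dampen wild swings.
    """
    ratio = expected_span / actual_span
    ratio = max(0.25, min(4.0, ratio))
    return old_difficulty * ratio

# Blocks arrived in 30 s instead of the scheduled 60 s: difficulty doubles.
faster = retarget(1000.0, actual_span=30.0, expected_span=60.0)

# Blocks took 600 s: the raw ratio of 0.1 is clamped to 0.25.
slower = retarget(1000.0, actual_span=600.0, expected_span=60.0)
```

This feedback loop is what keeps the payout schedule fixed no matter how much hardware joins, and it is exactly why extra hashrate translates into extra energy spent rather than extra coins minted.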
Imagine, for example, one miner alone on a network with a CPU doing 50 MH/s, getting all 100 coins that can possibly be paid out in a day. If another miner joins with the same CPU, each receives 50 coins a day instead of 100, since they split the required work evenly, even though the network’s electrical consumption has doubled along with the work. Electricity costs miners money and is a factor in driving up coin price along with adoption, and since more people are now mining, the coin is less centralized. Now suppose a large corporation finds it profitable to manufacture an ASIC for this coin, knowing it will make its money back mining it or selling the units to professionals. It joins the network doing 900 MH/s and pulls in 90 coins a day, while the two guys with their CPUs each get 5. Those two guys aren’t very happy, but the corporation is. Not only does this hurt the miners, it compromises the security of the entire network by centralizing the coin supply and hashrate, opening the doors to double spends and 51% attacks from potential malicious actors. Uncertainty of motives and questionable validity in a distributed ledger do not mix.
When technology advances in a field, it is usually applauded and welcomed with open arms, but in the world of crypto things can work quite differently. One of the glaring flaws in the current model, given the advent of specialized hardware, is that it never ends. Suppose the two men from the rather extreme example above took out loans to buy the ASIC they heard could earn them 90 coins a day. When they join the other ASIC on the network, the difficulty adjusts to keep daily payouts at 100, and each receives only about 33 coins instead of 90, since the reward is now split three ways. And what happens when a better ASIC is released by that corporation? Hopefully those two guys paid off their loans and sold their old ASICs before they became obsolete.
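The arithmetic in the worked example above follows from one rule: a fixed daily reward is split in proportion to hashrate. A minimal sketch, using the same made-up numbers as the example:

```python
def payouts(daily_reward: float, hashrates: dict) -> dict:
    """Split a fixed daily reward in proportion to each miner's hashrate."""
    total = sum(hashrates.values())
    return {name: daily_reward * rate / total for name, rate in hashrates.items()}

# Two CPUs at 50 MH/s each split the 100-coin day evenly: 50 apiece.
two_cpus = payouts(100, {"cpu1": 50, "cpu2": 50})

# A 900 MH/s ASIC joins: it takes 90 coins, and each CPU drops to 5.
with_asic = payouts(100, {"cpu1": 50, "cpu2": 50, "asic": 900})

# Three identical ASICs split the same 100 coins: about 33 apiece.
three_asics = payouts(100, {"a": 900, "b": 900, "c": 900})
```

Note what the totals show: the daily payout never grows, so every new device only dilutes everyone's share while the network's aggregate power bill keeps climbing.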
This system, as it stands, only perpetuates a never-ending hashrate arms race in which the weapons of choice are a combination of efficiency, economics, profitability, and in some cases control.

Implications of Centralization

This brings us to another big concern with expensive specialized hardware: the risk of centralization. Because they are so expensive and inaccessible to the casual miner, ASICs and FPGAs remain predominantly in the hands of a select few. Centralization occurs when one small group or a single entity controls the vast majority of the hash power, and as a result the coin supply, and can exert its influence to manipulate the market or, in some cases, the network itself (usually the case of dishonest nodes or bad actors).
This is entirely antithetical to what cryptocurrency was born of, and since its inception many concerted efforts have been made to avoid centralization at all costs. An entity in control of a centralized coin would have the power to manipulate its price, and a centralized hashrate would let it degrade network usability and reliability, and even perform double spends leading to the demise of the coin, among other things.
The world of crypto is a strange new place, with rapid advancements across many fields, economies, and borders, leaving plenty of room for improvement. While it may feel like a never-ending game of catch-up, many talented developers and programmers are working around the clock to bring us all more sustainable solutions.

The Rise of FPGAs

With the arrival of tools that let FPGAs be programmed in the widely used language C++, and thanks to their overall flexibility, FPGAs are becoming somewhat more common, especially in larger farms and industrial settings; but they remain primarily out of the hands of most mining enthusiasts and almost unheard of to the average hobby miner. Things appear to be changing, though (one example is discussed below), and some believe we will soon see a day when mining with a CPU or GPU just won’t cut it any longer, and the market will be dominated by FPGAs and specialized ASICs, bringing efficiency gains for proof of work while also carelessly leading us all toward the next round of spending.
A perfect real-world example of the effect specialized hardware has had on the crypto community was recently discovered, involving a fairly new project called Verus Coin (https://veruscoin.io/) and a fairly new, relatively more economically accessible FPGA. The FPGA is designed to target specific alt-coins whose algorithms do not require RAM overhead. It was discovered that the company had released a new algorithm, kept secret from the public, which could effectively mine Verus at 20x the speed of GPUs, the next-fastest hardware type mining on the Verus network.
Unfortunately, this was done with a deliberately secretive approach: the company called the Verus algorithm “Algo1”, encouraged owners of the FPGA never to speak of the algorithm in public channels, and admonished a user who did let the cat out of the bag. The problem with this business model is that it is parasitic in nature. In an ecosystem where advancements can benefit the entire crypto community, this sort of secret mining also runs counter to the philosophies set forth by Bitcoin and the subsequent open-source and decentralization movements.
Although this was not done in the spirit of open source, it does hint at an important step in hardware innovation that could put more efficient specialized systems within reach of the casual miner. The FPGA requires a unique set of data, called a bitstream, to recognize each individual coin’s algorithm and mine it. Because it is reprogrammable, with the support of a strong development team creating such bitstreams, the miner doesn’t end up with a brick if an algorithm changes.

All is not lost thanks to.. um.. Technology?

Shortly after discovering FPGAs on the network, the Verus developers quickly designed, tested, and implemented a new, much more complex and improved algorithm via a fork that enabled Verus to transition smoothly from VerusHash 1.0 to VerusHash 2.0 at block 310,000. Since the fork, VerusHash 2.0 has done exactly what it was designed for: equalizing hardware performance relative to the device being used, while enabling CPUs (the most widely available “ASICs”) to mine side by side with GPUs at a profit, and it appears this will also apply to other specialized hardware. This is something no other project has been able to do until now. Rather than pursue the folly of so many projects before it, attempting to be “ASIC-proof”, Verus effectively achieved and presents to the world an entirely new model of “hardware homogeny”. As the late, great Bruce Lee once said: “Don’t get set into one form, adapt it and build your own, and let it grow, be like water.”
In the design of VerusHash 2.0, Verus has shown it doesn’t resist progress the way so many other new algorithms try to do; it embraces change and adapts to it the way water becomes whatever vessel it inhabits. This new approach, an industry first, could very well become an industry standard, and in doing so would usher in a new age for proof-of-work coins. VerusHash 2.0 has the potential to correct the single largest design flaw in the proof-of-work consensus mechanism: the ever-expanding monetary and energy requirements that have plagued PoW projects since the mechanism’s inception. Verus also addresses the major issue of coin and nethash centralization by enabling legitimate CPU mining, offering greater coin and hashrate distribution.
Digging a bit deeper, it turns out the Verus development team are no rookies. The lead developer, Michael F. Toutonghi, has spent decades in the field: he is a former Vice President and Technical Fellow at Microsoft, the recognized founder and architect of Microsoft’s .Net platform, ex-Technical Fellow of Microsoft’s advertising platform, ex-CTO of Parallels Corporation, and an experienced distributed-computing and machine-learning architect. The project he helped create employs a diverse array of technologies and security features to form one of the most advanced and secure cryptocurrencies to date. A brief description of what makes VerusCoin special, quoted from a community member:
"Verus has a unique and new consensus algorithm called Proof of Power which is a 50% PoW/50% PoS algorithm that solves theoretical weaknesses in other PoS systems (Nothing at Stake problem for example) and is provably immune to 51% hash attacks. With this, Verus uses the new hash algorithm, VerusHash 2.0. VerusHash 2.0 is designed to better equalize mining across all hardware platforms, while favoring the latest CPUs over older types, which is also one defense against the centralizing potential of botnets. Unlike past efforts to equalize hardware hash-rates across different hardware types, VerusHash 2.0 explicitly enables CPUs to gain even more power relative to GPUs and FPGAs, enabling the most decentralizing hardware, CPUs (due to their virtually complete market penetration), to stay relevant as miners for the indefinite future. As for anonymity, Verus is not a "forced private", allowing for both transparent and shielded (private) transactions...and private messages as well"

If other projects can learn from this and adopt a similar approach, or continue to innovate with new ideas, it could mean an end to the doom-and-gloom predictions that CPU and GPU mining are dead. It would offer a much-needed reprieve to miners who have faced the difficult decision of either pulling the plug and shutting down shop, or breaking down their rigs to sell off parts and buy new, more expensive hardware, and in doing so it would bring an unprecedented level of decentralization not yet seen in cryptocurrency.
Technological advancements led us to the world of secure digital currencies and the progress being made with hardware efficiencies is indisputably beneficial to us all. ASICs and FPGAs aren’t inherently bad, and there are ways in which they could be made more affordable and available for mass distribution. More than anything, it is important that we work together as communities to find solutions that can benefit us all for the long term.

In an ever-changing world where it may be easy to lose sight of the real accomplishments that brought us to this point, one thing is certain: cryptocurrency is here to stay, and the projects doing something to solve the current problems in the proof-of-work consensus mechanism will be the ones that lead us toward our collective vision of a better world, not just for the world of crypto but for each and every one of us.
submitted by Godballz to CryptoTechnology [link] [comments]
