
Bitcoin Cash Block Sizes Average Less Than 100 KB, Defeating The Point Of Its Creation


https://cryptoiq.co/bitcoin-cash-block-sizes-average-less-than-100-kb-defeating-the-point-of-its-creation/
Bitcoin Cash (BCH) forked from the Bitcoin (BTC) blockchain in August 2017, amid a heated block size debate. At the time, the Bitcoin network was congested by rising transaction volume, and transaction fees had begun to exceed $1, going as high as $3 in June 2017.
The Bitcoin Cash community thought that increasing Bitcoin’s block size limit was the best method to increase scalability. Initially, when Bitcoin Cash was created, it had a block size limit of 8 MB, and this was later increased to 32 MB. But Bitcoin Cash (BCH) has a very low rate of adoption, and block sizes currently average less than 100 KB, making the block size increase above Bitcoin’s (BTC) 1 MB totally pointless, defeating the purpose of Bitcoin Cash (BCH).
The block explorer shows how Bitcoin Cash (BCH) cannot even reach 1 MB block sizes, let alone 32 MB. Block sizes of less than 10 KB are common, and there is an occasional block less than 1 KB. Blocks in excess of 100 KB are rare, and there are no blocks today anywhere near 1 MB. Therefore, Bitcoin Cash (BCH) could have a block size of 1 MB and function perfectly. The long term block size chart shows that block sizes have averaged well below 100 KB throughout December 2018.
There are a few instances in 2018 when Bitcoin Cash (BCH) exceeded 1 MB block sizes. In early September average block size briefly hit 1-3 MB, but this was from a “stress test” to prove transaction fees do not increase from increased transactions on the network.
In November, Bitcoin Cash (BCH) split into Bitcoin Cash ABC (now named Bitcoin Cash) and Bitcoin SV. The war between these Bitcoin Cash forks caused spam attacks that increased block sizes to 1-2 MB on average.
On Jan. 15 the average Bitcoin Cash (BCH) block size approached 5 MB, coinciding with the price of Bitcoin Cash (BCH) crashing from $2,700 to $1,500. This is perhaps the one case where Bitcoin Cash's network legitimately had block sizes over 1 MB, but it was due to people dumping their Bitcoin Cash (BCH) as fast as possible in a panic-selling situation.
In summary, since Bitcoin Cash (BCH) has relatively low network activity when compared to Bitcoin (BTC), it seems that there was no point in creating Bitcoin Cash (BCH), since its block sizes are almost always below 100 KB.
Bitcoin (BTC) seems to have resolved its transaction fee problems with Segregated Witness (SegWit), which raises the effective block size to about 1.2 MB on average. SegWit does this by redefining the block limit as 4,000,000 weight units instead of 1,000,000 bytes and by moving the witness (signature) data out of the data used to compute transaction IDs; each witness byte counts as 1 weight unit while each non-witness byte counts as 4, so witness data is effectively discounted to ¼ weight.
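To make the weight arithmetic concrete, here is a minimal Python sketch of the BIP 141 rule; the byte counts are illustrative, not taken from any real block:

    WEIGHT_LIMIT = 4_000_000  # BIP 141 block weight limit, in weight units

    def block_weight(non_witness_bytes: int, witness_bytes: int) -> int:
        # Each non-witness byte weighs 4 units; each witness byte weighs 1.
        return 4 * non_witness_bytes + witness_bytes

    # 0.8 MB of non-witness data plus 0.8 MB of signatures hits the limit
    # exactly, yet the serialized block is 1.6 MB -- which is how SegWit
    # blocks exceed the old 1 MB cap without a hard fork.
    assert block_weight(800_000, 800_000) == WEIGHT_LIMIT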
Also, the Bitcoin Lightning Network is maturing and can handle as much transaction volume as Bitcoin needs without increasing on-chain transactions or block size. In November 2018, the Lightning Network rapidly grew in capacity due to increasing Bitcoin transaction volume and proved that it is a solution which can completely mitigate rises in Bitcoin transaction fees. The fact that Bitcoin (BTC) has become scalable to increased transaction frequency makes the creation of Bitcoin Cash (BCH) even more pointless.
submitted by turtlecane to CryptoCurrency

EOS - Getting Started & Helpful Links

WELCOME TO eos!

Table of Contents

  1. What is EOS?
  2. Why is EOS Different?
  3. Get Started
    1. WHAT IS AN EOS ACCOUNT?
      1. GET FREE EOS ACCOUNTS
      2. WHAT IS REX AND HOW TO USE IT FOR RESOURCES
      3. DECENTRALIZED FINANCE (DEFI) ON EOS
  4. Channels, dApps, Block Explorer and more
    1. Governance and Security
    2. Wallets
    3. DApps
    4. Popular dApps
    5. Block Explorers
      1. REX User Interfaces
  5. Channels
  6. FAQ

What is EOS?

EOS is a community-driven distributed blockchain that allows the development and execution of industrial-scale decentralized applications (dApps). EOS intends to become a blockchain dApp platform that can securely and smoothly scale to thousands of transactions per second, all while providing an accessible experience to app developers, entrepreneurs, and users. It aims to provide a complete operating system for decentralized applications, with services like user authentication, cloud storage, and server hosting.
The open-source EOS.IO code was developed, and is currently maintained, by Block.One. Block.One is based in the Cayman Islands and is led by Brendan Blumer (CEO), Daniel Larimer (CTO) and Andrew Bliss (CFO).
Links:
Video:

How is EOS different?

EOS is the first blockchain that focuses on building a dApp platform using the delegated proof-of-stake (dPoS) consensus mechanism. With dPoS, EOS manages to provide a public blockchain with some particular features, such as scalability, flexibility, usability and governance.
Further Reading:

Get Started:

WHAT IS AN EOS ACCOUNT?

From EOS Beginners: Anatomy of an EOS Account
An EOS account is a readable name that is stored on the EOS blockchain and connected to your “keys”. An EOS account is required for performing actions on the EOS platform, such as sending/receiving tokens, voting, and staking.
Each account is linked to a public key, and this public key is in turn linked to a private key. A private key can be used to generate an associated public key, but not vice versa. (A private key and its associated public key make up a key pair)
These keys ensure that only you can access and perform actions with your account. Your public key is visible to everyone using the network. Your private key, however, is never shown. You must store your private keys in a safe location, and they should not be shared with anyone (unless you want your EOS to be stolen)!
TLDR: EOS Accounts are controlled by key pairs and store EOS tokens in the Blockchain. Wallets store key pairs that are used to sign transactions.
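As a minimal illustration of that key-pair relationship, here is a Python sketch using the third-party ecdsa package (EOS uses the same secp256k1 curve as Bitcoin); real EOS tooling such as cleos or Anchor adds EOS-specific key encoding on top, which is omitted here:

    from ecdsa import SigningKey, SECP256k1

    private_key = SigningKey.generate(curve=SECP256k1)  # keep this secret
    public_key = private_key.get_verifying_key()        # safe to publish

    # The public key is derived from the private key; the derivation
    # cannot be run backwards to recover the private key.
    print(public_key.to_string().hex())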

GET FREE EOS ACCOUNTS

From EOS Onboarding: Free Accounts
Unlike other chains in the space, EOSIO accounts do not typically pay a transfer fee for sending tokens or performing actions on the blockchain. Where Bitcoin and Ethereum charge a fee for every mined transaction, EOS provides feeless transactions to users based on CPU, NET, and RAM resources.
Although those wanting to create EOS accounts have traditionally needed to 'pay a fee' to get into the system, in reality this fee is nothing more than a basic stake of CPU and NET resources. Transactions on the network are therefore free in theory; the number of transactions any user gets in a 24-hour window is determined by the amount of stake, especially CPU, that the account maintains.
This guide provides a brief overview of the account-creation process for some of the account types that allow easy, frictionless EOS mainnet onboarding and, in most cases, provide more than enough resources to use the network without having to go through the process of buying, transferring, and staking or renting resources to keep your account operational.

WHAT IS REX AND HOW TO USE IT FOR RESOURCES

What is REX?
REX (Resource Exchange) is a resource market in which EOS token holders can lease out their tokens in return for "rent", and dApps can lease the resources they need at lower cost.
For EOS holders: earn an income by putting your spare EOS tokens in REX instead of just keeping them in your EOS account.
For EOS dApps: lease as many resources as you need at a decent price instead of staking EOS for resources at a 1:1 ratio.
Source: TokenPocket
Through REX you can pay a small amount of EOS to receive a much larger amount in CPU or NET for a whole month. Today (August 20, 2020), paying 1 EOS on REX guarantees you 7,500 EOS in CPU for 30 days.
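A quick back-of-the-envelope comparison based on that quoted rate (the rate floats daily, so treat these numbers purely as an illustration):

    rex_rate = 7_500      # EOS worth of CPU rented per 1 EOS paid (quoted Aug 20, 2020)
    cpu_needed = 7_500    # EOS worth of CPU an app wants for 30 days

    rex_cost = cpu_needed / rex_rate   # 1 EOS spent as rent
    stake_cost = cpu_needed            # 7,500 EOS locked at 1:1
    print(f"REX: pay {rex_cost:.0f} EOS rent vs. staking: lock {stake_cost} EOS")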
You can easily use REX via Anchor Wallet, importing your EOS Account and with a few simple clicks. Learn how to use REX with Anchor

DECENTRALIZED FINANCE (DEFI) ON EOS

Decentralized Finance (DeFi) is the combination of traditional financial instruments with decentralized blockchain technology. DeFi is currently the fastest-growing sector in blockchain, and it allows greater inclusion in the financial system, even for those who previously could not participate in the global economy. Indeed, a smartphone is enough to use DeFi products, so the so-called "unbanked" can now participate without any restrictions.
DeFi Projects on EOS
  • VIGOR - VIGOR protocol is a borrow, lend, and save community
  • Defibox - One-stop EOS DeFi Platform
  • Equilibrium (EOSDT) - Equilibrium is an all-in-one Interoperable DeFi hub
  • PredIQt (prediqt.everipedia.org) - PredIQt is a prediction market & oracle protocol
  • PIZZA - PIZZA is an EOS based decentralized stablecoin system and financial platform
  • Chintai - high performance, fee-less community owned token leasing platform

Channels, dApps, Block Explorer and more

Governance and Security:

Wallets:

DApps:

Popular dApps:

  • NewDex - Decentralized Exchange.
  • Prospectors - MMO Game with Real-Time Economic Strategies
  • Everipedia - Wiki-based online encyclopedia
  • Upland - Property trading game with real-world addresses
  • Crypto Dynasty - Play-to-Earn with Crypto Dynasty

Block explorers:

Guides to vote:

REX User Interface:

Channels:

Official:
Community:
Telegram:
Telegram Non-English General:
Developers:
Testnets:

FAQ:

submitted by eosgo to eos

A technical dive into CTOR

Over the last several days I've been looking in detail at numerous aspects of the now-infamous CTOR change that is scheduled for the November hard fork. I'd like to offer a concrete overview of what exactly CTOR is, what the code looks like, how well it works, what the algorithms are, and the outlook. If anyone finds the change to be mysterious or unclear, then hopefully this will help them out.
This document is placed into public domain.

What is TTOR? CTOR? AOR?

Currently in Bitcoin Cash, there are many possible ways to order the transactions in a block. There is only a partial ordering requirement in that transactions must be ordered causally -- if a transaction spends an output from another transaction in the same block, then the spending transaction must come after. This is known as the Topological Transaction Ordering Rule (TTOR) since it can be mathematically described as a topological ordering of the graph of transactions held inside the block.
The November 2018 hard fork will change to a Canonical Transaction Ordering Rule (CTOR). This CTOR will enforce that for a given set of transactions in a block, there is only one valid order (hence "canonical"). Any future blocks that deviate from this ordering rule will be deemed invalid. The specific canonical ordering that has been chosen for November is a dictionary ordering (lexicographic) based on the transaction ID. You can see an example of it in this testnet block (explorer here, provided this testnet is still alive). Note that the txids are all in dictionary order, except for the coinbase transaction which always comes first. The precise canonical ordering rule can be described as "coinbase first, then ascending lexicographic order based on txid".
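In code, the rule is nothing more exotic than a sort. A minimal sketch, assuming txids are given as lowercase hex strings (for which hex order matches byte order) and using made-up txids:

    def canonical_order(coinbase_txid, other_txids):
        # Coinbase first, then ascending lexicographic order by txid.
        return [coinbase_txid] + sorted(other_txids)

    block_order = canonical_order("d00f...", ["f3a1...", "07bd...", "9c42..."])
    assert block_order[1:] == sorted(block_order[1:])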
(If you want to have your bitcoin node join this testnet, see the instructions here. Hopefully we can get a public faucet and ElectrumX server running soon, so light wallet users can play with the testnet too.)
Another ordering rule that has been suggested is removing restrictions on ordering (except that the coinbase must come first) -- this is known as the Any Ordering Rule (AOR). There are no serious proposals to switch to AOR but it will be important in the discussions below.

Two changes: removing the old order (TTOR->AOR), and installing a new order (AOR->CTOR)

The proposed November upgrade combines two changes in one step:
  1. Removing the old causal rule: a spending transaction may now come before the transaction whose output it spends within the same block.
  2. Adding a new rule that fixes the ordering of all transactions in the block.
In this document I am going to distinguish these two steps (TTOR->AOR, AOR->CTOR) as I believe it helps to clarify the way different components are affected by the change.

Code changes in Bitcoin ABC

In Bitcoin ABC, several thousand lines of code have been changed from version 0.17.1 to version 0.18.1 (the current version at time of writing). The differences can be viewed here, on github. The vast majority of these changes appear to be various refactorings, code style changes, and so on. The relevant bits of code that deal with the November hard fork activation can be found by searching for "MagneticAnomaly"; the variable magneticanomalyactivationtime sets the time at which the new rules will activate.
The main changes relating to transaction ordering are found in the file src/validation.cpp:
There are other changes as well:

Algorithms

Serial block processing (one thread)

One of the most important steps in validating blocks is updating the unspent transaction outputs (UTXO) set. It is during this process that double spends are detected and invalidated.
The standard way to process a block in bitcoin is to loop through transactions one-by-one, removing spent outputs and then adding new outputs. This straightforward approach requires exact topological order and fails otherwise (therefore it automatically verifies TTOR). In pseudocode:
    for tx in transactions:
        remove_utxos(tx.inputs)
        add_utxos(tx.outputs)
Note that modern implementations do not apply these changes immediately, rather, the adds/removes are saved into a commit. After validation is completed, the commit is applied to the UTXO database in batch.
By breaking this into two loops, it becomes possible to update the UTXO set in a way that doesn't care about ordering. This is known as the outputs-then-inputs (OTI) algorithm.
    for tx in transactions:
        add_utxos(tx.outputs)

    for tx in transactions:
        remove_utxos(tx.inputs)
Benchmarks by Jonathan Toomim with Bitcoin ABC, and by myself with ElectrumX, show that the performance penalty of OTI's two loops (as opposed to the one loop version) is negligible.

Concurrent block processing

The UTXO updates actually form a significant fraction of the time needed for block processing. It would be helpful if they could be parallelized.
There are some concurrent algorithms for block validation that require quasi-topological order to function correctly. For example, multiple workers could process the standard loop shown above, starting at the beginning. A worker temporarily pauses if the utxo does not exist yet, since it's possible that another worker will soon create that utxo.
There are issues with such order-sensitive concurrent block processing algorithms:
In contrast, the OTI algorithm's loops are fully parallelizable: the worker threads can operate in an independent manner and touch transactions in any order. Until recently, OTI was thought to be unable to verify TTOR, so one reason to remove TTOR was that it would allow changing to parallel OTI. It turns out however that this is not true: Jonathan Toomim has shown that TTOR enforcement is easily added by recording new UTXOs' indices within-block, and then comparing indices during the remove phase.
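A sketch of that trick, assuming each transaction object exposes a txid, a list of outputs, and inputs given as (txid, vout) outpoints (names are illustrative, not ABC's actual code):

    def oti_validate_ttor(transactions):
        # Phase 1 (adds): record the block position where each output is created.
        created_at = {}                              # (txid, vout) -> position
        for pos, tx in enumerate(transactions):
            for vout in range(len(tx.outputs)):
                created_at[(tx.txid, vout)] = pos
        # Phase 2 (removes): any in-block outpoint must have been created
        # at an earlier position than the transaction spending it.
        for pos, tx in enumerate(transactions):
            for outpoint in tx.inputs:
                creator = created_at.get(outpoint)
                if creator is not None and creator >= pos:
                    return False                     # spender precedes parent: TTOR violated
        return True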
In any case, it appears to me that any concurrent validation algorithm would need such additional code to verify that TTOR is being exactly respected; thus for concurrent validation TTOR is a hindrance at best.

Advanced parallel techniques

With Bitcoin Cash blocks scaling to large sizes, it may one day be necessary to scale onto advanced server architectures involving sharding. A lot of discussion has been made over this possibility, but really it is too early to start optimizing for sharding. I would note that at this scale, TTOR is not going to be helpful, and CTOR may or may not lead to performance optimizations.

Block propagation (graphene)

A major bottleneck that exists in Bitcoin Cash today is block propagation. During the stress test, it was noticed that the largest blocks (~20 MB) could take minutes to propagate across the network. This is a serious concern since propagation delays mean increased orphan rates, which in turn complicate the economics and incentives of mining.
'Graphene' is a set reconciliation technique using bloom filters and invertible bloom lookup tables. It drastically reduces the amount of bandwidth required to communicate a block. Unfortunately, the core graphene mechanism does not provide ordering information, and so if many orderings are possible then ordering information needs to be appended. For large blocks, this ordering information makes up the majority of the graphene message.
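The information-theoretic floor makes the point: encoding one arbitrary permutation of n transactions takes about log2(n!) bits. A small sketch of that estimate:

    import math

    def ordering_info_bytes(n_tx):
        # log2(n!) bits for one permutation; lgamma(n+1) = ln(n!) avoids
        # computing the astronomically large factorial itself.
        return math.lgamma(n_tx + 1) / math.log(2) / 8

    # For a hypothetical 1,000,000-transaction block, the order alone
    # costs ~2.3 MB -- far more than the few KB the bloom filter and
    # IBLT need for the transaction set itself.
    print(f"{ordering_info_bytes(1_000_000) / 1e6:.2f} MB")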
To reduce the size of ordering information while keeping TTOR, miners could optionally decide to order their transactions in a canonical ordering (Gavin's order, for example) and the graphene protocol could be hard-coded so that this kind of special order is transmitted in one byte. This would add a significant technical burden on mining software (to create blocks in such a specific, unusual order) as well as on graphene (which must detect this order and be able to reconstruct it). It is not clear to me whether it would be possible to efficiently parallelize sorting algorithms that reconstruct these orderings.
The adoption of CTOR gives an easy solution to all this: there is only one ordering, so no extra ordering information needs to be appended. The ordering is recovered with a comparison sort, which parallelizes better than a topological sort. This should simplify the graphene codebase and it removes the need to start considering supporting various optional ordering encodings.

Reversibility and technical debt

Can the change to CTOR be undone at a later time? Yes and no.
For block validators / block explorers that look over historical blocks, the removal of TTOR will permanently rule out usage of the standard serial processing algorithm. This is not really a problem (aside from the one-time annoyance), since OTI appears to be just as efficient in serial, and it parallelizes well.
For anything that deals with new blocks (like graphene, network protocol, block builders for mining, new block validation), it is not a problem to change the ordering at a later date (to AOR / TTOR or back to CTOR again, or something else). These changes would add no long term technical debt, since they only involve new blocks. For past-block validation it can be retroactively declared that old blocks (older than a few months) have no ordering requirement.

Summary and outlook

Taking a broader view, graphene is not the magic bullet for network propagation. Even with the CTOR-improved graphene, we might not see vastly better performance right away. There is also work needed in the network layer to simply move the messages faster between nodes. In the last stress test, we also saw limitations on mempool performance (tx acceptance and relaying). I hope both of these fronts see optimizations before the next stress test, so that a fresh set of bottlenecks can be revealed.
submitted by markblundeberg to btc

Transcript of the community Q&A with Steve Shadders and Daniel Connolly of the Bitcoin SV development team. We talk about the path to big blocks, new opcodes, selfish mining, malleability, and why November will lead to a divergence in consensus rules. (Cont in comments)

We've gone through the painstaking process of transcribing the linked interview with Steve Shadders and Daniel Connolly of the Bitcoin SV team. There is an amazing amount of information in this interview that we feel is important for businesses and miners to hear, so we believe it was important to get it in a written form. To avoid any bias, the transcript is taken almost word for word from the video, with just a few changes made for easier reading. If you see any corrections that need to be made, please let us know.
Each question is in bold, and each question and response is timestamped accordingly. You can follow along with the video here:
https://youtu.be/tPImTXFb_U8

BEGIN TRANSCRIPT:

Connor: 0:02:19.68,0:02:45.10
Alright so thank You Daniel and Steve for joining us. We're joined by Steve Shadders and Daniel Connolly from nChain and also the lead developers of the Satoshi’s Vision client. So Daniel and Steve do you guys just want to introduce yourselves before we kind of get started here - who are you guys and how did you get started?
Steve: 0:02:38.83,0:03:30.61
So I'm Steve Shadders and at nChain I am the director of solutions in engineering and specifically for Bitcoin SV I am the technical director of the project which means that I'm a bit less hands-on than Daniel but I handle a lot of the liaison with the miners - that's the conditional project.
Daniel:
Hi I’m Daniel I’m the lead developer for Bitcoin SV. As the team's grown that means that I do less actual coding myself but more organizing the team and organizing what we’re working on.
Connor: 0:03:23.07,0:04:15.98
Great so we took some questions - we asked on Reddit to have people come and post their questions. We tried to take as many of those as we could and eliminate some of the duplicates, so we're gonna kind of go through each question one by one. We added some questions of our own in and we'll try and get through most of these if we can. So I think we just wanted to start out and ask, you know, Bitcoin Cash is a little bit over a year old now. Bitcoin itself is ten years old but in the past a little over a year now what has the process been like for you guys working with the multiple development teams and, you know, why is it important that the Satoshi’s vision client exists today?
Steve: 0:04:17.66,0:06:03.46
I mean yes, well, we've been in touch with the developer teams for quite some time - I think a bi-weekly meeting of Bitcoin Cash developers across all implementations started around November last year. I myself joined those in January or February of this year and Daniel a few months later. So we communicate with all of those teams and I think, you know, it's not been without its challenges. It's well known that there's a lot of disagreement around it, but what I do look forward to in the near future is a day when the consensus issues themselves are all rather settled, and if we get to that point then there's not going to be much reason for the different developer teams to disagree on stuff. They might disagree on non-consensus related stuff but that's not the end of the world because, you know, Bitcoin Unlimited is free to go and implement whatever they want in the backend of Bitcoin Unlimited and Bitcoin SV is free to do whatever they want in the backend, and if they interoperate on a non-consensus level, great. If they don't, it's not such a big problem - there will obviously be bridges between the two. So, yeah, I think going forward the complications of having so many personalities with wildly different ideas are going to get less and less.
Cory: 0:06:00.59,0:06:19.59
I guess moving forward now another question about the testnet - a lot of people on Reddit have been asking what the testing process for Bitcoin SV has been like, and if you guys plan on releasing any of those results from the testing?
Daniel: 0:06:19.59,0:07:55.55
Sure, yeah - so our release was concentrated on stability, right, with the first release of Bitcoin SV, and that involved doing a large amount of additional testing, particularly not so much at the unit-test level but at the system-test level - setting up test networks, performing tests, and making sure that the software behaved as we expected, right. Confirming the changes we made, making sure that there aren't any other side effects. Because, you know, it was quite a rush to release the first version, we've got our test results documented, but not in a way that we can really release them. We're thinking about doing that but we're not there yet.
Steve: 0:07:50.25,0:09:50.87
Just to tidy that up - we've spent a lot of our time developing really robust test processes and the reporting is something that we can read on our internal systems easily, but we need to tidy that up to give it out for public release. The priority for us was making sure that the software was safe to use. We've established a test framework that involves a progression of code changes through multiple test environments - I think it's five different test environments before it gets the QA stamp of approval - and as for the question about the testnet, yeah, we've got four of them. We've got Testnet One and Testnet Two. A slightly different numbering scheme to the testnet three that everyone's probably used to – that’s just how we reference them internally. They're [1 and 2] both forks of Testnet Three. [Testnet] One we used for activation testing, so we would test things before and after activation - that one’s set to reset every couple of days. The other one [Testnet Two] was set to post activation so that we can test all of the consensus changes. The third one was a performance test network which I think most people have probably have heard us refer to before as Gigablock Testnet. I get my tongue tied every time I try to say that word so I've started calling it the Performance test network and I think we're planning on having two of those: one that we can just do our own stuff with and experiment without having to worry about external unknown factors going on and having other people joining it and doing stuff that we don't know about that affects our ability to baseline performance tests, but the other one (which I think might still be a work in progress so Daniel might be able to answer that one) is one of them where basically everyone will be able to join and they can try and mess stuff up as bad as they want.
Daniel: 0:09:45.02,0:10:20.93
Yeah, so we recently shared the details of Testnet One and Two with the other BCH developer groups. The Gigablock test network we've shared with one group so far, but yeah, we're building it, as Steve pointed out, to be publicly accessible.
Connor: 0:10:18.88,0:10:44.00
I think that was my next question - I saw that you posted on Twitter about the revived Gigablock testnet initiative, and it looked like blocks bigger than 32 megabytes were being mined and propagated there, but maybe the block explorers themselves were going down - what does that revived Gigablock test initiative look like?
Daniel: 0:10:41.62,0:11:58.34
That's what the Gigablock test network is. The Gigablock test network was first set up by Bitcoin Unlimited with nChain's help, and they did some great work on that, and we wanted to revive it. So we wanted to bring it back and do some large-scale testing on it. It's a flexible network - at one point we had eight different large nodes spread across the globe, sort of mirroring the old one. Right now we've scaled back because we're not using it at the moment, so you'll notice, I think, three. We have produced some large blocks there and it's helped us a lot in our research into the scaling capabilities of Bitcoin SV, so it's guided the work that the team's been doing for the last month or two on the improvements that we need for scalability.
Steve: 0:11:56.48,0:13:34.25
I think that's actually a good point to kind of frame where our priorities have been, in kind of two separate stages. I think, as Daniel mentioned before, because of the time constraints we kept the change set for the October 15 release as minimal as possible - it was just the consensus changes. We didn't do any work on performance at all and we put all our focus and energy into establishing the QA process and making sure that that change was safe, and that was a good process for us to go through. It highlighted what we were missing in our team - we got our recruiters very busy recruiting a Test Manager and more QA people. The second stage after that is performance-related work which, as Daniel mentioned, the results of our performance testing fed into - what tasks we were gonna start working on for the performance-related stuff. Now that work is still in progress - for some of the items that we identified the code is done and that's going through the QA process, but it's not quite there yet. That's basically the two-stage process that we've been through so far. We have a roadmap that goes further into the future that outlines more stuff, but primarily it's been QA first, performance second. The performance enhancements are close and on the horizon but some of that work should be ongoing for quite some time.
Daniel: 0:13:37.49,0:14:35.14
Some of the changes we need for the performance are really quite large and really get down into the base level view of the software. There's kind of two groups of them mainly. One that are internal to the software – to Bitcoin SV itself - improving the way it works inside. And then there's other ones that interface it with the outside world. One of those in particular we're working closely with another group to make a compatible change - it's not consensus changing or anything like that - but having the same interface on multiple different implementations will be very helpful right, so we're working closely with them to make improvements for scalability.
Connor: 0:14:32.60,0:15:26.45
Obviously for Bitcoin SV one of the main things that you guys wanted to do, that some of the other developer groups weren't willing to do right now, is to increase the maximum default block size to 128 megabytes. I kind of wanted to pick your brains a little bit about that - a lot of the objection to either removing the block size limit entirely or increasing it on a larger scale is this idea of the infinite block attack, right, and that kind of came through in a lot of the questions. What are your thoughts on the "infinite block attack" - is it something that really exists, is it something that miners themselves should be more proactive in preventing, or I guess what are your thoughts on that attack that everyone says will happen if you uncap the block size?
Steve: 0:15:23.45,0:18:28.56
I'm often quoted on Twitter and Reddit - I've said before the infinite block attack is bullshit. Now, that's a statement that I suppose is easy to take out of context, but I think the 128 MB limit is something there are probably two schools of thought about. There are some people who think that you shouldn't increase the limit to 128 MB until the software can handle it, and there are others who think that it's fine to do it now so that the limit is already in place when the software can handle it, and you don't run into the limit when the software improves. Obviously we're from the latter school of thought. As I said before we've got a bunch of performance increases, performance enhancements, in the pipeline. If we wait till May to increase the block size limit to 128 MB then those performance enhancements will go in, but we won't be able to actually demonstrate it on mainnet. As for the infinite block attack itself, I mean there are a number of mitigations that you can put in place. I mean firstly, you know, going down to a bit of the tech detail - when you send a block message or send any peer-to-peer message there's a header which has the size of the message. If someone says they're sending you a 30MB message and you're receiving it and it gets to 33MB then obviously you know something's wrong so you can drop the connection. If someone sends you a message that's 129 MB and you know the block size limit is 128, you know it's kind of pointless to download that message. So I mean these are just some of the mitigations that you can put in place. When I say the attack is bullshit, I mean it is bullshit in the sense that it's really quite trivial to prevent it from happening. I think there is a bit of a school of thought in the Bitcoin world that if it's not in the software right now then it kind of doesn't exist. I disagree with that, because there are small changes that can be made to work around problems like this. One other aspect of the infinite block attack - and let's not call it the infinite block attack, let's just call it the large block attack - it takes a lot of time to validate, but we've gotten around that by having parallel pipelines for blocks to come in. So you've got a block that's coming in and it's stuck for two hours or whatever, downloading and validating; at some point another block is going to get mined by someone else, and as long as those two blocks aren't stuck in a serial pipeline then you know the problem kind of goes away.
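As a quick illustration of the two header checks described here, a minimal Python sketch assuming a hypothetical length-prefixed framing - this is illustrative, not the actual Bitcoin SV networking code:

    MAX_BLOCK_SIZE = 128 * 1_000_000   # consensus block size limit, in bytes

    def receive_payload(sock, declared_size):
        # Check 1: a message claiming to be larger than any valid block
        # is pointless to download at all.
        if declared_size > MAX_BLOCK_SIZE:
            raise ConnectionError("declared size exceeds block limit; drop peer")
        received = bytearray()
        while len(received) < declared_size:
            chunk = sock.recv(64 * 1024)
            if not chunk:
                raise ConnectionError("peer disconnected mid-message")
            received += chunk
            # Check 2: a peer streaming more bytes than its header declared
            # is misbehaving, so stop downloading immediately.
            if len(received) > declared_size:
                raise ConnectionError("payload exceeds declared size; drop peer")
        return bytes(received)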
Cory: 0:18:26.55,0:18:48.27
Are there any concerns with the propagation of those larger blocks? Because there's a lot of questions around you know what the practical size of scaling right now Bitcoin SV could do and the concerns around propagating those blocks across the whole network.
Steve 0:18:45.84,0:21:37.73
Yes, there have been concerns raised about it. I think what people forget is that compact blocks and xThin exist, so a 32MB block does not send 32MB of data in most cases - almost all cases. The concern here that I think I do find legitimate is the Great Firewall of China. Very early on in Bitcoin SV we started talking with miners on the other side of the firewall and that was one of their primary concerns. We had anecdotal reports of people who were having trouble getting a stable connection any faster than 200 kilobits per second, and even with compact blocks you still need to get the transactions across the firewall. So we've done a lot of research into that - we tested our own links across the firewall, or rather CoinGeek's links across the firewall, as they've given us access to some of their servers so that we can play around, and we were able to get sustained rates of 50 to 90 megabits per second, which pushes that problem quite a long way down the road into the future. I don't know the maths off the top of my head, but the size of the blocks that rate can sustain is pretty large. So we're looking at a couple of options - it may well be that the chattiness of the peer-to-peer protocol causes some of these issues with the Great Firewall, so we have someone building a bridge concept/tool where you basically just have one kind of TX vacuum on either side of the firewall that collects them all up and sends them off every one or two seconds as a single big chunk to eliminate some of that chattiness. The other is we're looking at building a multiplexer that will sit and send stuff up to the peer-to-peer network on one side and send it over splitters, to send it over multiple links, and reassemble it on the other side, so we can sort of transit the Great Firewall without too much trouble. But I mean getting back to the core of your question - yes, there is a theoretical limit to block size propagation time and that's kind of where Moore's Law comes in. Putting in faster links, you kick that can further down the road, and you just keep on putting in faster links. I don't think 128 MB blocks are going to be an issue though with the speed of the internet that we have nowadays.
Connor: 0:21:34.99,0:22:17.84
One of the other changes that you guys are introducing is increasing the max script size so I think right now it’s going from 201 to 500 [opcodes]. So I guess a few of the questions we got was I guess #1 like why not uncap it entirely - I think you guys said you ran into some concerns while testing that - and then #2 also specifically we had a question about how certain are you that there are no remaining n squared bugs or vulnerabilities left in script execution?
Steve: 0:22:15.50,0:25:36.79
It's interesting, the decision - we were initially planning on removing that cap altogether, and the next cap that comes into play after that (the next effective cap) is a 10,000-byte limit on the size of the script. We took a more conservative route and decided to wind that back to 500 - it's interesting that we got some criticism for that when the primary criticism I think that was leveled against us was that it's dangerous to increase that limit to unlimited. We did that because we're being conservative. We did some research into these log n squared bugs, sorry - attacks, that people have referred to. We identified a few of them and we had a hard think about it and thought - look, if we can find this many in a short time we can fix them all (the whack-a-mole approach), but it does suggest that there may well be more unknown ones. So we thought about taking the whack-a-mole approach, but that doesn't really give us any certainty. We will fix all of those individually, but a more global approach is to make sure that if anyone does discover one of these scripts it doesn't bring the node to a screaming halt. The problem here is that because the Bitcoin node is essentially single-threaded, if you get one of these scripts that locks up the script engine for a long time, everything that's behind it in the queue has to stop and wait. So what we wanted to do, and this is something we've got an engineer actively working on right now, is once that script validation code path is properly parallelized (parts of it already are), then we'll basically assign a few threads for well-known transaction templates, and a few threads for any type of script. So if you get a few scripts that are nasty and lock up a thread for a while, that's not going to stop the node from working, because you've got these other kind of lanes of the highway that are exclusively reserved for well-known script templates and they'll just keep on passing through. Once you've got that in place, I think we're in a much better position to get rid of that limit entirely, because the worst that's going to happen is your non-standard script pipelines get clogged up but everything else will keep ticking along - there are other mitigations for this as well. I mean miners could always put a time limit on script execution if they wanted to, and that would be something that would be up to individual miners. Bitcoin SV's job I think is to provide the tools for the miners and the miners can then choose, you know, how to make use of them - if they want to set time limits on script execution then that's a choice for them.
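The "lanes of the highway" idea can be sketched with two work queues in Python; everything here (thread counts, the template check, the stub validator) is hypothetical and only meant to show the shape of the design:

    import queue
    import threading

    def looks_like_known_template(script):
        # Placeholder for an IsStandard-style template check (e.g. P2PKH,
        # whose scriptPubKey starts with OP_DUP OP_HASH160).
        return script.startswith(b"\x76\xa9")

    def validate(script):
        pass  # stand-in for the actual script interpreter

    def worker(lane):
        while True:
            script = lane.get()
            if script is None:   # sentinel: shut this worker down
                return
            validate(script)

    standard_lane, open_lane = queue.Queue(), queue.Queue()
    # Reserve most threads for well-known templates: even if slow scripts
    # clog the open lane, ordinary payments keep flowing.
    for _ in range(6):
        threading.Thread(target=worker, args=(standard_lane,), daemon=True).start()
    for _ in range(2):
        threading.Thread(target=worker, args=(open_lane,), daemon=True).start()

    def submit(script):
        lane = standard_lane if looks_like_known_template(script) else open_lane
        lane.put(script)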
Daniel: 0:25:34.82,0:26:15.85
Yeah, I'd like to point out that a node here, when it receives a transaction through the peer-to-peer network, doesn't have to accept that transaction - it can reject it. If it looks suspicious to the node it can just say, you know, we're not going to deal with that, or if it takes more than five minutes to execute, or more than a minute even, it can just abort and discard that transaction, right. The only time we can't do that is when it's in a block already, but then it could decide to reject the block as well. Those are all possibilities that could be in the software.
Steve: 0:26:13.08,0:26:20.64
Yeah, and if it's in a block already it means someone else was able to validate it so…
Cory: 0:26:21.21,0:26:43.60
There's a lot of discussion about the re-enabled opcodes coming - OP_MUL, OP_INVERT, OP_LSHIFT, and OP_RSHIFT. Could you maybe explain the significance of those opcodes being re-enabled?
Steve: 0:26:42.01,0:28:17.01
Well I mean one of the most significant things is that, other than two, which are minor variants of DUP and MUL, they represent almost the complete set of original opcodes. I think that's not necessarily a technical issue, but it's an important milestone. MUL is one that I've heard some interesting comments about. People ask me why are you putting OP_MUL back in if you're planning on changing them to big-number operations instead of the 32-bit limit that they're currently imposed upon. The simple answer to that question is that we currently have all of the other arithmetic operations except for OP_MUL. We've got add, divide, subtract, modulo - it's odd to have a script system that's got all the mathematical primitives except for multiplication. The other answer to that question is that they're useful - we've talked about a Rabin signature solution that basically replicates the function of DATASIGVERIFY. That's just one example of a use case for this - most cryptographic primitive operations require mathematical operations, and bit shifts are useful for a whole ton of things. So it's really just about completing that work and completing the script engine - or rather not completing it, but putting it back the way that it was meant to be.
Connor 0:28:20.42,0:29:22.62
Big Num vs 32 Bit. I've seen Daniel - I think I saw you answer this on Reddit a little while ago, but the new op codes using logical shifts and Satoshi’s version use arithmetic shifts - the general question that I think a lot of people keep bringing up is, maybe in a rhetorical way but they say why not restore it back to the way Satoshi had it exactly - what are the benefits of changing it now to operate a little bit differently?
Daniel: 0:29:18.75,0:31:12.15
Yeah, there's two parts there - the big number one, and LSHIFT being a logical shift instead of arithmetic. So when we re-enabled these opcodes we looked at them carefully and adjusted them slightly, as we did in the past with OP_SPLIT. The new LSHIFT and RSHIFT are bitwise operators. They can be used to implement arithmetic-based shifts - I think I've posted a short script that did that - but we can't do it the other way around, right. You couldn't use an arithmetic shift operator to implement a bitwise one. It's because of the ordering of the bytes in the arithmetic values, the values that represent numbers. They're little-endian, which means the bytes are swapped around compared to what many other systems use - what I'd consider normal - big-endian. And if you start shifting that properly as a number, then the shifting sequence in the bytes is a bit strange, so it couldn't go the other way around - you couldn't implement a bitwise shift with arithmetic, so we chose to make them bitwise operators - that's what we proposed.
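That asymmetry - a bitwise shift can emulate an arithmetic one, but not the reverse - can be shown in a toy Python model. This assumes the bitwise operator shifts the raw byte string as one big-endian bit sequence (the real opcode's exact bit flow may differ); doubling a little-endian script number then just needs a byte swap around the shift:

    def lshift_bits(data, n):
        # Logical left shift over a raw byte string treated as one
        # big-endian bit sequence: overflow bits drop, zeros shift in.
        width = len(data)
        value = int.from_bytes(data, "big") << n
        return (value & ((1 << (8 * width)) - 1)).to_bytes(width, "big")

    num = (1000).to_bytes(2, "little")          # script numbers are little-endian
    doubled = lshift_bits(num[::-1], 1)[::-1]   # byte-swap, shift, swap back = x2
    assert int.from_bytes(doubled, "little") == 2000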
Steve: 0:31:10.57,0:31:51.51
That was essentially a decision that was made in May, or rather a consequence of decisions that were made in May. So in May we reintroduced OP_AND, OP_OR, and OP_XOR, and a decision to replace three different string operators with OP_SPLIT was also made. So that was not a decision that we made unilaterally; it was a decision that was made collectively with all of the BCH developers - well, not all of them were actually in all of the meetings, but they were all invited.
Daniel: 0:31:48.24,0:32:23.13
Another example of that is that we originally proposed OP_2DIV and OP_2MUL, I think - a single operator that multiplies the value by two, right - but it was pointed out that that can very easily be achieved by just doing a multiply by two instead of having a separate operator for it, so we scrapped those, we took them back out, because we wanted to keep the number of operators to a minimum, yeah.
Steve: 0:32:17.59,0:33:47.20
There was an appetite around for keeping the operators minimal. I mean the idea to replace OP_SUBSTR, OP_LEFT, OP_RIGHT with the OP_SPLIT operator actually came from Gavin Andresen. He made a brief appearance in the Telegram workgroups while we were working out what to do with the May opcodes, and obviously Gavin's word carries a lot of weight and we listen to him. But because we had chosen to implement the May opcodes (the bitwise opcodes) and treat the data as big-endian data streams (well, sorry, big-endian is not really applicable - just plain data strings), it would have been completely inconsistent to implement LSHIFT and RSHIFT as integer operators, because then you would have had a set of bitwise operators that operated on two different kinds of data, which would have been nonsensical and very difficult for anyone to work with, so yeah. I mean it's a bit like P2SH - it wasn't a part of the original Satoshi protocol, but once some things are done they're done, and if you want to make forward progress you've got to work within the framework that exists.
Daniel: 0:33:45.85,0:34:48.97
When we get to the big number ones then it gets really complicated, you know, number implementations because then you can't change the behavior of the existing opcodes, and I don't mean OP_MUL, I mean the other ones that have been there for a while. You can't suddenly make them big number ones without seriously looking at what scripts there might be out there and the impact of that change on those existing scripts, right. The other the other point is you don't know what scripts are out there because of P2SH - there could be scripts that you don't know the content of and you don't know what effect changing the behavior of these operators would mean. The big number thing is tricky, so another option might be, yeah, I don't know what the options for though it needs some serious thought.
Steve: 0:34:43.27,0:35:24.23
That’s something we've reached out to the other implementation teams about - actually really would like their input on the best ways to go about restoring big number operations. It has to be done extremely carefully and I don't know if we'll get there by May next year, or when, but we’re certainly willing to put a lot of resources into it and we're more than happy to work with BU or XT or whoever wants to work with us on getting that done and getting it done safely.
Connor: 0:35:19.30,0:35:57.49
Kind of along this similar vein, you know, Bitcoin Core introduced this concept of standard scripts, right - standard and non-standard scripts. I had pretty interesting conversation with Clemens Ley about use cases for “non-standard scripts” as they're called. I know at least one developer on Bitcoin ABC is very hesitant, or kind of pushed back on him about doing that and so what are your thoughts about non-standard scripts and the entirety of like an IsStandard check?
Steve: 0:35:58.31,0:37:35.73
I'd actually like to repurpose the concept. I think I mentioned before multi-threaded script validation and having some dedicated well-known script templates - when you say the words "well-known script template" there's already a check in Bitcoin that kind of tells you if it's well-known or not, and that's IsStandard. I'm generally in favor of getting rid of the notion of standard transactions, but it's actually a decision for miners, and it's really more of a behavioral change than it is a technical change. There's a whole bunch of configuration options that miners can set that affect what they consider to be standard and not standard, but the reality is not too many miners are using those configuration options. So I mean standard transactions as a concept is meaningful to an arbitrary degree I suppose, but yeah, I would like to make it easier for people to get non-standard scripts into Bitcoin so that they can experiment, and from discussions I've had with CoinGeek they're quite keen on making their miners accept, you know, at least initially, a wider variety of transactions eventually.
Daniel: 0:37:32.85,0:38:07.95
So I think IsStandard will remain important within the implementation itself for efficiency purposes, right - you want to streamline the base use case of cash payments and prioritize them. That's where it will remain important, but on the interfaces from the node to the rest of the network, yeah, I could easily see it being removed.
Cory: 0:38:06.24,0:38:35.46
Connor mentioned that there are some people that disagree with Bitcoin SV and what they're doing - a lot of questions around, you know, why November? Why implement these changes in November - they think that maybe a six-month delay might avoid a split. So first off, what do you think about the idea of a potential split, and I guess what is the urgency for November?
Steve: 0:38:33.30,0:40:42.42
Well in November there's going to be a divergence of consensus rules regardless of whether we implement these new opcodes or not. Bitcoin ABC released their spec for the November hard fork change, I think on August 16th or 17th, something like that, and their client as well, and it included CTOR and it included DSV. Now for the miners that commissioned the SV project, CTOR and DSV are controversial changes, and once they're in they're in. They can't be reversed - I mean CTOR maybe you could reverse at a later date, but DSV, once someone's put a P2SH transaction or even a non-P2SH transaction in the blockchain using that opcode, it's irreversible. So it's interesting that some people refer to the Bitcoin SV project as causing a split - we're not proposing to do anything that anyone disagrees with. There might be some contention about changing the opcode limit, but what we're doing - I mean Bitcoin ABC already published their spec for May and it is our spec for the new opcodes. So in terms of urgency - should we wait? Well, the fact is that we can't - come November, you know, it's a bit like Segwit - once Segwit was in, yes, you arguably could get it out by spending everyone's anyone-can-spend transactions, but in reality it's never going to be that easy and it's going to cause a lot of economic disruption, so yeah, that's it. We're putting our changes in because it's not gonna make a difference either way in terms of whether there's going to be a divergence of consensus rules - there's going to be a divergence whatever our changes are. Our changes are not controversial at all.
Daniel: 0:40:39.79,0:41:03.08
If we didn't include these changes in the November upgrade we'd be pushing ahead with no change, right? But the November upgrade is there, so we should use it while we can, adding these non-controversial changes to it.
Connor: 0:41:01.55,0:41:35.61
Can you talk about DATASIGVERIFY? What are your concerns with it? The general concept that's been kind of floated around because of Ryan Charles is the idea that it's a subsidy, right - that it takes a whole megabyte and kind of crunches that down and the computation time stays the same but maybe the cost is lesser - do you kind of share his view on that or what are your concerns with it?
Daniel: 0:41:34.01,0:43:38.41
Can I say one or two things about this - there are different ways to look at that, right. I'm an engineer - my specialization is software, so on the economics of it I hear different opinions. I trust some more than others, but I am NOT an economist. With my limited expertise I kind of agree with the ones saying it's a subsidy - it looks very much like it to me - but yeah, that's not my area. What I can talk about is the software - so adding DSV adds really quite a lot of complexity to the code, right, and it's a big change to add that. And what are we going to do - every time someone comes up with an idea we're going to add a new opcode? How many opcodes are we going to add? I saw reports that Jihan was talking about hundreds of opcodes or something like that, and it's like how big is this client going to become - how big is this node - is it going to have to handle every kind of weird opcode that's out there? The software is just going to get unmanageable. And DSV - my main consideration at the beginning was, you know, if you can implement it in script you should do it, because that way it keeps the node software simple, it keeps it stable, and it's easier to test that it works properly and correctly. It's almost like adding (?) code from a microprocessor, you know - why would you do that if you can implement it already in the script that is there.
Steve: 0:43:36.16,0:46:09.71
It's actually an interesting inconsistency, because when we were talking about adding the opcodes in May, the philosophy that seemed to drive the decisions that we were able to form a consensus around was to simplify and keep the opcodes as minimal as possible (ie where you could replicate a function by using a couple of primitive opcodes in combination, that was preferable to adding a new opcode that replaced them). OP_SUBSTR is an interesting example - it's a combination of SPLIT, and SWAP and DROP opcodes to achieve it. So at the really primitive script level we've got this philosophy of let's keep it minimal, and at this other (?) philosophy it's all let's just add a new opcode for every primitive function - and Daniel's right, it's a question of opening the floodgates. Where does it end? If we're just going to go down this road, it almost opens up the argument why have a scripting language at all? Why not just hard-code all of these functions in one at a time? You know, pay-to-public-key-hash is a well-known construct (?) and not bother executing a script at all, but once we've done that we take away all of the flexibility for people to innovate. So it's a philosophical difference, I think, but I think it's one where the position of keeping it simple does make sense. All of the primitives are there to do what people need to do. The things that people feel like they can't do are because of the limits that exist. If we had no opcode limit at all, if you could make a gigabyte transaction, so a gigabyte script, then you could do any kind of crypto that you wanted even with 32-bit integer operations. Once you get rid of the 32-bit limit, of course, a lot of those scripts come out a lot smaller, so a Rabin signature script shrinks from 100MB to a couple hundred bytes.
Daniel: 0:46:06.77,0:47:36.65
I lost a good six months of my life diving into script, right. Once you start getting into the language and what it can do, it is really pretty impressive how much you can achieve within script. Bitcoin was designed, was released originally, with script. I mean it didn't have to be - instead of having transactions with script you could have accounts, and you could say transfer, you know, so many BTC from this public key to this one - but that's not the way it was done. It was done using script, and script provides so many capabilities if you start exploring it properly. If you start really digging into what it can do, yeah, it's really amazing what you can do with script. I'm really looking forward to seeing some very interesting applications from that. I mean Awemany's zero-conf script was really interesting, right. It relies on DSV, which is a problem (and there are some other things I don't like about it), but him diving in and using script to solve this problem was really cool; it was really good to see that.
Steve: 0:47:32.78,0:48:16.44
I asked a question to a couple of people in our research team that have been working on the Rabin signature stuff this morning actually and I wasn't sure where they are up to with this, but they're actually working on a proof of concept (which I believe is pretty close to done) which is a Rabin signature script - it will use smaller signatures so that it can fit within the current limits, but it will be, you know, effectively the same algorithm (as DSV) so I can't give you an exact date on when that will happen, but it looks like we'll have a Rabin signature in the blockchain soon (a mini-Rabin signature).
Cory: 0:48:13.61,0:48:57.63
Based on your responses I think I kind of already know the answer to this question, but there are a lot of questions about ending experimentation on Bitcoin. I was gonna kind of turn that into - with the plan that Bitcoin SV is on, do you guys see a potential final release, where there are gonna be no new opcodes ever released (like maybe five years down the road we just solidify the base protocol and move forward with that), or are you guys more of the idea of being open-ended, where new opcodes can be introduced under appropriate testing?
Steve: 0:48:55.80,0:49:47.43
I think you've got to factor in what I said before about the philosophical differences. I think new functionality can be introduced just fine. Having said that - yes, there is a place for new opcodes, but it's probably a limited place. In my opinion the cryptographic primitive functions, for example - CHECKSIG uses ECDSA with a specific elliptic curve, HASH256 uses SHA256 - at some point in the future those are going to no longer be as secure as we would like them to be and we'll replace them with different hash functions and verification functions, but I think that's a long way down the track.
Daniel: 0:49:42.47,0:50:30.3
I'd like to see more data too. I'd like to see evidence that these things are needed, and the way I could imagine that happening is that, you know, with the full scripting language some solution is implemented and we discover that it's really useful, and over a period measured in years, not days, we find a lot of transactions are using this feature - then maybe we should look at introducing an opcode to optimize it. But optimizing before we even know if it's going to be useful, yeah, that's the wrong approach.
Steve: 0:50:28.19,0:51:45.29
I think that optimization is actually going to become an economic decision for the miners. From the miner's point of view, does it make more sense for them to optimize a particular process - does it reduce costs for them such that they can offer a better service to everyone else? So ultimately these decisions are going to be miners' decisions, not developer decisions. Developers of course can offer their input - I wouldn't expect every miner to be an expert on script - but as we're already seeing, miners are actually starting to employ their own developers. I'm not just talking about us - there are other miners in China that I know have got some really bright people on their staff who question and challenge all of the changes - study them and produce their own reports. We've been lucky to actually be able to talk to some of those people and have some really fascinating technical discussions with them.
submitted by The_BCH_Boys to btc [link] [comments]

Weekly Update: Mycro on ParJar, PAR on MetaMorphPro, new customer for Resolvr, 1UP on IDEX... – 19 Jul - 25 Jul'19

Weekly Update: Mycro on ParJar, PAR on MetaMorphPro, new customer for Resolvr, 1UP on IDEX... – 19 Jul - 25 Jul'19
Heya everyone, looks like we are in for another round of rapid catch ups on the weekly updates. Haha. Here's another exciting week at Parachute + partners (19 Jul - 25 Jul'19):

In honour of our latest partnership with Silent Notary, this week we had an SNTR Parena. Richi won the finale to take home a cool share of the 1.5M SNTR pot. The weekly Parena had a 100k PAR pot. McPrine took home the lion’s share by beating Ken in a closely fought finale. In the 8 months since ParJar started, we are now at 12k users, 190k transactions and 200+ communities. Cap says: “…to put it into perspective - June 18th we were around 100k transactions and 9k users. A month later we’ve added 3k new users (33% growth) and 80,000 new transactions”. Freaking amazing! And thank you for the shoutout aXpire! MYO (Mycro) was added to ParJar this week. And their community started experiencing the joys of tipping.
Lolarious work by @k16v5q5!
Last week MetaMorphPro did a Twitter vote to list new projects. Turns out Parachuters did PAR a solid. Woot woot! The first ever official TTR shirt is already live in the Parachute shop. Alexis announced the start of a shirt design contest to add to the TTR shirt inventory. Ian’s art quiz in TTR this week saw 25k PAR being given away to winners. Victor’s quiz had another 25k PAR pot for the winners. And Unique’s Math quiz in TTR was a 50k PAR extravaganza. All in all, 100k PAR won in quizzes in TTR this week. Sweet! Cryptonoob (Tom) set up a survey this week for “..for people who are interested in Crypto but don't know where to start..” for his work on the Parachute app UX. We all know how much Gian loves the reality show Big Brother. So we saw a new take on his Tuesday fun events. Mention your favourite reality show and what it’s all about to get some cool PAR. Yay!
A PAR coaster makes its way from design to final product in @k16v5q5’s workshop
Chris’ Golf tourney contest resulted in no winners since there were no correct guesses. So he decided to give out fun prizes instead: like Jason for coming last, or Win for a “hilariously bad guess” of 100 strokes for the champion’s total score. Haha. However, there were a few top prize winners as well. LordHades, with a tournament score of 1968, took home 50k PAR as the grand prize. Neat! Ali, Hang, Clinton and Tony came in close at 2nd to 5th positions. Congrats! And with that, Chris announced the start of another contest: Premier League Challenge for Parachuters (entry code: x0zj2d) with an entry fee of 5000 PAR each. Prize pool yet to be announced. Jason is still in the lead this week in the Big Chili Race at 47 cm. Not much change in the other plants either. Slow week in Chili land.
Ric getting in on that sweet Parachute merch
Last week we shared that AXPR got listed on Binance DEX. The ERC20-BEP2 conversion bridge went live this week. Learn how to convert your ERC20 tokens to the BEP2 variant from the available how-to guides (article/video/gif). To mark the occasion, aXpire gave away a ton of BNB in an easter egg contest, plus a 1% AXPR deposit bonus to folks who started using the bridge. Remember how we had mentioned that the reason for the weekly double burn of AXPR would be revealed this week? Well, here it is: Resolvr onboarded a new client, HealthGates. More fees, more burn. Read more about it here. Woot! Victor hosted his weekly trivia on Friday at aXpire for 1000 AXPR. 10 questions. 100 AXPR each. Nice! Catch up on the week that was at aXpire from their latest video update. 2gether was selected as one of the top 100 most innovative projects by South Summit this week. Cryzen built a Discord-Telegram chat bridge so that anything posted on either platform gets cross-posted on the other. The latest WandX update covers the dev work that’s been going on for the past few weeks – support for a Tezos wallet, staking live for Tezos, Livepeer and Loom, etc.
2gether on South Summit’s honour roll
BOMB community member rouse wrote a quick script on how to identify and avoid common crypto scams. Have a read. As BOMB says, “Stay vigilant and always verify”. Last week's giveaway for the top lessons shared by entrepreneurs had so many good entries that the final list was expanded to 19 winners. Awesome stuff! Zach’s latest article on the difference between BOMB and BOMBX explores both the basic and the more complex distinctions. Switcheo’s introductory piece on hyperdeflationary tokens also talks at length about the BOMB project. Zach also announced the start of the Telegram Takeover Challenge this week – get new communities to experience ParJar and BOMB and earn some cool BOMB tokens in return. Win win! In preparation for the integration of the SMS feature in the Birdchain app, the team released an article on some key statistics. Here’s a video from Birdchain CEO Joao Martins discussing the feature. The latest Bounty0x distribution report can be found here. Also, check out a shoutout to the platform in this NodesOfValue article on bounty hunting opportunities.
Start of beta testing for SMS feature in Birdchain
The ETHOS Universal Wallet now supports Bitcoin Cash and Typerium. Following ETHOS’ listing on Voyager, it will also become the native token on Voyager. Switch continued its PR campaign with cover pieces on Yahoo, CCN and DDFX this week. Altcoin Buzz has a section on its site named “Community Speaks” where members of a crypto community share updates on a project they support. This week, Fantom was featured in this section. V-ID is the latest project using Fantom’s ERC20-BEP2 bridge for listing on Binance Dex. Big props to FTM for opening it up to other projects. FTM got listed on Probit and Airswap. FTM can also now be used as collateral for borrowing on the Constant platform. The Fantom Foundation joined the Australian Digital Commerce Association which works on regulatory advocacy in blockchain. This was also a perfect setting for the Fantom Innovation Labs team to attend the APAC Blockchain Conference in Sydney. Here’s a report. In this week’s techno-literature, have a read of the various Fantom mainnets and the TxFlow protocol by clicking here and here respectively.
Another proposed token utility of ETHOS
Uptrennd’s 1UP token was listed on IDEX this week. To put it simply, the growth at Uptrennd Twitter has been explosive. Check out these numbers. Awesome stats! This free speech vs fair pay chart shared by Jeff explains why the community backs the platform. About 96% of 1UP issued this week has been used to level up on Uptrennd. Want a recap of the latest at Uptrennd? Click here. Crypto influencer Didi Taihuttu and his family (The Bitcoin Family) joined the platform this week. Congrats once again to Horizon State for making it to the finals of The Wellington Gold Awards. Some great networking opportunities and exposure right there. If you have been lagging behind on HST news, the latest community update covers the past month. We had also mentioned last week that Horizon State is conducting a vote for The Opportunities Party in New Zealand. Here’s a media report on it. Catch up on the latest at District0xverse from their Weekly and Dev updates. The Meme Factory bot was introduced this week to track new memes and marketplace trends on Meme Factory. The HYDRO article contest started last week was extended to the 27th. 50k HYDRO in prizes to be won. Noice! Hydrogen got nominated as a Finalist to the 2019 FinXTech Awards. HYDRO was also listed on the HubrisOne wallet this week. And finally, here’s a closer look at the Hydro Labs team. The folks who make the magic happen. Sup guys!
The Parachute Big Chili Race Update – Jason at 1st, Sebastian at 3rd
And with that, we close for this week at Parachute and partners. See you again with another weekly update soon.
submitted by abhijoysarkar to ParachuteToken [link] [comments]

Let us not forget the original reason we needed the NYA agreement in the first place. Centralization in mining manufacturing has allowed pools to grow too powerful, granting them the power to veto protocol changes and giving them bargaining power where there should be none.

SegWit2x through the NYA agreement was a compromise with a group of Chinese mining pools who all march to the beat of the same drum. Antpool, ViaBTC, BTC.TOP, btc.com, CANOE and bitcoin.com are all financially linked or linked through correlated behavior: Antpool, ConnectBTC and btc.com are directly controlled by Bitmain, and ViaBTC and Bitmain have a "shared investor relationship". If Bitmain is against position A, then all those other pools have historically followed in its footsteps. As Jimmy Song explains here, the NYA compromise happened because only a small minority of individuals with a disproportionate amount of hashrate were against SegWit (Bitmain and the subsidiaries listed above), while the rest, the majority of NYA signatories, were pro-SegWit. The purpose of the compromise was to prevent a chain split, which would cause damage to the ecosystem and a loss of confidence in Bitcoin generally.
At the current time of calculation, according to blockchain.info hashrate charts, these pools account for 47.6% of the hashrate. What does it matter if these pools are running a shell game of different subsidiaries or CEOs if they all follow a single individual's orders? 47.6% is enough hashrate right now to perform a 51% attack on the network once mining luck is factored in. This statistic alone should demonstrate the enormous threat that Bitmain has placed on the entire Bitcoin ecosystem. It has compromised the decentralized model of mining by monopolizing ASIC manufacturing, which has led to a scenario in which Bitcoin's security model is threatened.
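For reference, section 11 of the Bitcoin whitepaper quantifies the danger: the probability that an attacker with hashrate share q ever erases a z-block confirmation lead. A short Python sketch of that calculation (illustrative only) shows how close to certainty it gets near a 50% share:

```python
from math import exp, factorial

def attacker_success(q: float, z: int) -> float:
    """Probability an attacker with hashrate share q ever catches up
    from z blocks behind (Bitcoin whitepaper, section 11)."""
    p = 1.0 - q
    lam = z * (q / p)
    prob = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam ** k / factorial(k)
        prob -= poisson * (1 - (q / p) ** (z - k))
    return prob

# At a 47.6% share, even 6 confirmations offer little protection:
print(round(attacker_success(0.476, 6), 3))  # ~0.9
```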
But let us explore the reasoning behind these individuals' actions by taking a look at history. First, Bitmain has consistently supported consensus-breaking alternative clients: supporting Bitcoin Classic, supporting Bitcoin Unlimited and its horrifically broken "emergent consensus" algorithm, and responding to BIP148 with a UAHF declaration. Then, once it realized that BIP148/BIP91 would succeed at activating SegWit without splitting the network, Bitmain abandoned its attempt at a "UAHF" and admitted on its blog that Bitcoin Cash is based on the UAHF. The very notion of compromising with an entity to prevent a split, when that entity is itself supporting a split, is illogical by nature and a pointless exercise.
Let us not forget that Bitmain was so diametrically opposed to SegWit that it sabotaged Litecoin's SegWit activation period to prevent SegWit from activating on Litecoin. Do these actions sound like those of a rational actor who has the best interests of Bitcoin at heart? Or do they sound like an authoritarian regime that wants to stifle information at any cost to prevent the public from seeing the benefits that SegWit provides?
But the real question must still be asked. Why? Why would Bitmain, which is so focused on increasing the block size to reduce fee pressure, delay a protocol upgrade that both increases the block size and reduces fee pressure? If miners are financially incentivized to behave in a way that is economically favorable to Bitcoin, then why would they purposefully sabotage protocol improvements that increase the long-term survival of Bitcoin?
There is plenty of evidence suggesting that covert ASICBOOST, a mechanism by which an ASIC miner shortcuts Bitcoin's proof-of-work process (grinding the nonce and reordering transactions) and an innovation that Bitmain holds a Chinese patent for, is the real reason Bitmain originally blocked SegWit's activation. Bitcoin Core developer Gregory Maxwell speculated that this covert ASICBOOST technology could earn Bitmain 100 million dollars a year.
It is notable that the hard fork proposals Bitmain has supported, such as Bitcoin Classic, Bitcoin Unlimited, Bitcoin ABC/Bcash and now SegWit2x, all preserve Bitmain's covert ASICBOOST technology, while SegWit, the soft fork, breaks ASICBOOST's effectiveness.
But if that is not enough of a demonstration of rational economic incentives to behave in such a way, then what about irrational reasons, such as ideological positions or pride?
It's no secret that Chinese miners' dislike for Bitcoin Core matured when the Hong Kong agreement was broken. Many miners have consistently rationalized "firing Bitcoin Core developers", and we even have a direct account from a Bitpay employee who said Jihan directly told him that his purpose is to "get rid of Blockstream and Core developers". And while the breaking of the Hong Kong agreement is quite muddied waters, there is proof in the blockchain that Chinese miners were the first to break the terms of the agreement, by mining a block with an alternative client. Some Bitcoin Core developers continued to work on hard fork proposals despite this, offering up public proposals, BIPs and released code to attempt to satisfy the terms of the agreement. Yet only in hindsight did everyone realize that no individual or group of individuals can force the entire Bitcoin network to upgrade. It is only through the slow, methodical process of social consensus building that we can get such a large decentralized global network to agree to upgrade the protocol in a safe manner. Yet to this day we still have bitter ideological wars over this HK agreement "being broken", despite how long ago it was and how clear the situation is in hindsight.
When you take into account the historical record of these individuals' and businesses' actions, it clearly demonstrates a pattern of behavior that undermines the long-term health of Bitcoin. When you analyze their behavior from a rational economic viewpoint, you can clearly see that they are sabotaging the long-term health of Bitcoin to preserve short-term profits.
Considering this information, why would other Bitcoin ecosystem businesses "compromise" with such a malicious actor? Let us not forget that these actors (the entire reason we needed to compromise in the first place) went ahead and forked the Bitcoin network already, creating the first Bitcoin-shared-history altcoin, Bitcoin ABC. So we compromised with people to prevent the splitting of Bitcoin, so that they could go ahead and split Bitcoin? What illogical insanity is this? Why would you "stick to your guns" on an agreement that was nullified the moment Bitmain and ViaBTC supported a hard fork outside of the S2X agreement? Doubly questionable is your support when the hard fork is highly contentious and guaranteed to cause a split, damage Bitcoin, create chaos and damage global confidence.
A lot of the signatories of the NYA agreement are payment processors and gateway businesses. Their financial health depends upon short-term growth of Bitcoin to increase business activity and shore up investors' capital with revenue from that transactional growth. Their priorities are to ensure short-term growth and to appease their investors. But their actions demonstrate a type of cause and effect that often occurs in markets across the world: by redistributing network resource costs to node operators, they are simply shuffling costs onto the public so that they can benefit in the short term without needing to allocate extra capital.
But these actions do not benefit the health of Bitcoin long term. Splitting the network, once again, does not increase confidence in the Bitcoin network. It does not foster growth. Increasing the block size, after SegWit already increases the block size, will not get us any closer to VISA transaction levels from a statistical viewpoint. Increasing the TPS from 3 to 7 when we need to get to 30,000 TPS is quite an illogical decision at face value. Increasing the block size on-chain to get to that level would destroy any pretense of decentralization long before we even came close, and without decentralization we have no censorship resistance or fungibility. These are fundamental to the value of Bitcoin as a network and currency. Polymath and industry-wide respected crypto expert Nick Szabo has written extensively on scaling Bitcoin and why layer 2 networks are essential.
To all the signatories of SegWit2x I ask you: What are you trying to accomplish by splitting Bitcoin once again? What consensus building have you done to ensure that Bitcoin won't suffer a catastrophic contentious hard fork? As it stands right now, I only see a portion of the economic actors in the Bitcoin ecosystem supporting S2X - nowhere near enough to prevent miners from supporting the legacy chain when a large portion of the economy will still be operating on the legacy chain, preserving its value. Where there is money, it's going to be extremely difficult to topple the status quo/legacy network, and the cards are stacked against you. Without full consensus from the majority of developers, economic actors/nodes, exchanges, payment processors, gateways and wallets, you will only fork yourself off the legacy network and reap destruction and chaos as the legacy chain and S2X battle it out.
If you truly support Bitcoin and are dedicated to the long-term success of Bitcoin and your business, then why would you engage or compromise with demonstrably malicious actors within the Bitcoin ecosystem to accomplish a goal that was designed by them to further monopolize and centralize their control, at the cost of destroying Bitcoin's security model?
Bitcoin Core developers are actually positive on hard forks and want to eventually increase the legacy block size; they just wish to do it in a responsible manner that does not put the network at risk the way SegWit2x does.
Also, it seems a rational engineering choice to optimize and compress transactions and protocols before increasing the block size. SegWit, Schnorr signatures and MAST are all great examples of on-chain scaling technology that Bitcoin Core has shipped or is building, to the long-term benefit of Bitcoin.
The fate of Bitcoin will be determined by users, who choose when, how and where they transact. If businesses attempt to force them onto the S2X chain, users will abandon those businesses for a service that does not attempt, through coercion, to force them onto a specific forked network.
Finally, without replay protection there can be no clean split and no free-market mechanism to determine the winner. I understand that this is purposefully designed this way, to force a war between the legacy chain and S2X, but if you stand for everything Bitcoin stands for, then you as central actors will not try to force people onto your chain. Instead, you should allow the market to decide which chain is more valuable.
If you will not abandon this poisonous hard fork pill, then please advocate and lobby to add default replay protection to the btc1 codebase. You cannot claim free-market principles and then, out of the other side of your mouth, collude with central actors to force protocol changes upon users. Either you believe in Bitcoin, or you are here to join the miners in their poorly disguised attempts to monopolize, subvert and sabotage Bitcoin.
submitted by Cryptolution to Bitcoin [link] [comments]

Transcript from Ravencoin Open Developer Meeting - Nov. 16, 2018

Tron at 2:03 PM

Topics: Messaging (next phase) UI 2.2 - Build from develop - still working out a few kinks Mobile - Send/Rcv/View Assets - In progress Raven Dev Kit -Status

RavencoinDev at 2:04 PM

Hey Everybody! Let's get started! Thanks Tron for posting the topics. Tron is going to talk about Messaging Plans. Let's start there.

Chatturga at 2:06 PM

It looks like this channel is not connected to the IRC. One moment

RavencoinDev at 2:07 PM

Well, we're going to move forward as the tech guys fix the IRC connections.

Tron at 2:07 PM

I wanted to have a doc describing the messaging, but it isn't quite ready. I understand this isn't going to IRC yet, but I'm starting anyway.

RavencoinDev at 2:08 PM

Look for it soon on a Medium near you.

Tron at 2:08 PM

Summary version: Every transaction can have an IPFS hash attached.

Vincent at 2:09 PM

any plans for a 'create IPFS' button?

RavencoinDev at 2:09 PM

Yes

Vincent at 2:09 PM

on asset creatin window also?

RavencoinDev at 2:09 PM

Yes

Vincent at 2:09 PM

sweet

Tron at 2:09 PM

IPFS attachments for transactions that send an ownership token or channel token back to the same address will be considered broadcast messages for that token. The client will show the message. Some anti-spam measures will be introduced. If a token is in a new address, then messages will be on by default. For the second token in an address, the channel will be available, but muted by default.

RavencoinDev at 2:11 PM

That way I can't spam out 21b tokens and then start sending messages to everybody.

Tron at 2:11 PM

We'd like to have messaging in a reference client on all six platforms.

corby at 2:11 PM

Hi!

Tron at 2:11 PM

Photos will not be shown. Messages will be "linkified"

RavencoinDev at 2:12 PM

and plain text.We'll start with the QT wallet support

Tron at 2:12 PM

Any other client is free to show any IPFS message they choose.The messaging is fully transparent.

Rikki RATTOE at 2:13 PM

ok, so messaging isn't private

Tron at 2:13 PM

Anyone could read the chain and see the messages.

RavencoinDev at 2:13 PM

No, never was planned to be private

MSFTserver-mine more @ MinerMore at 2:13 PM

irc link should be fixed

Tron at 2:13 PM

It is possible to put encrypted content in the IPFS, but then you'd have to distribute the key somehow.

RavencoinDev at 2:13 PM

Thanks MSFT!

Chatturga at 2:13 PM

Negative

Tron at 2:14 PM

Core protocol changes: extend the OP_RVN_ASSET to include, for any transfer:
RVNT <0xHH> <0x12> <0x20> <32 bytes encoding the 256-bit IPFS hash>
0xHH - file type: 0x00 - no data, 0x01 - IPFS hash, 0x02 through 0xFF - RESERVED
0x12 - IPFS spec - using a SHA256 hash
0x20 - IPFS spec - 0x20 in hex, specifying a 32-byte hash
…. (32-byte hash in binary)
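As an illustration of that layout, here is how a client might decode the IPFS suffix of such a transfer - a Python sketch whose function name and error handling are hypothetical, not taken from the reference implementation:

```python
def parse_rvnt_ipfs(payload: bytes):
    """Decode an RVNT IPFS suffix: <file type> <0x12> <0x20> <32-byte digest>."""
    file_type = payload[0]
    if file_type == 0x00:
        return None                   # 0x00 means no attached data
    if file_type != 0x01:
        raise ValueError("file types 0x02-0xFF are reserved")
    if payload[1] != 0x12 or payload[2] != 0x20:
        raise ValueError("expected SHA256 multihash prefix 0x12 0x20")
    digest = payload[3:35]
    if len(digest) != 32:
        raise ValueError("truncated IPFS hash")
    return digest  # base58-encoding 0x12 0x20 + digest yields the familiar Qm... CID
```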

corby at 2:14 PM

By its nature nothing on chain is private per se. Just like with wallets, you'd need to use crypto to secure messaging between parties.

Tron at 2:14 PM

Advantages:
- This messaging protocol does not fill up the blockchain. The message information is public, so IPFS works as a great distributed store.
- If the messages are important enough, the message sender can run nodes that "PIN" the message to keep a more durable version.
- The message system cannot be spoofed: any change in the message results in a different hash, and therefore a different message location.
- Only the unique token holder can sign the transaction that adds the message. This prevents spam.
- Message clients (wallets) can opt in or opt out of messages by channel.
- Meta-message websites can allow viewing of all messages, or all messages for a token.
- A simple single-channel system is supported by the protocol, but a client could subdivide a channel into as many sub-channels as desired. There are no limits on the number of channels per token, but each channel requires the 5 RVN fee to create the channel.

RavencoinDev at 2:14 PM

So, somebody could create their own client and encrypt the data on the blockchain if they wished.

corby at 2:15 PM

Wow Tron types fast

Rikki RATTOE at 2:15 PM

yeah there was some confusion in the community whether messaging would be private and off chain

Tron at 2:15 PM

Anti-Spam Strategy

One difficulty we have is that tokens can be sent to any Ravencoin asset holder unsolicited. This happens on other asset platforms like Counterparty. In many cases this is good, and is a way for asset issuers to get their token known - it is essentially an airdrop. However, combined with the messaging capabilities of Ravencoin, this can, and likely will, become a spam strategy. Someone who wants to send messages (probably scams) to Ravencoin asset holders, who they know are crypto-savvy people, will create a token with billions of units, send it to every address, and then message with the talking stick for that token. Unless we preemptively address this problem, Ravencoin messaging will become a useless spam channel.

Anyone can stop the messages for an asset by burning the asset, or by turning off the channel. A simple solution is to automatically mute the channel (by default) for the 2nd asset sent to an address. The reason this works is that the assets you acquire through your own actions will go to a newly generated address. The normal workflow would be to purchase an asset on an exchange, or through an ICO/STO sale. For an exchange, you'll provide a withdrawal address, and best practice says you request a new address from the client with File -> 'Receiving addresses…' -> New. To provide an address to the ICO/STO issuer, you would do the same. It is only the case where someone is sending assets unsolicited to you that an address would be re-used for asset tokens. This is not 100% the case, and there may be rare edge cases, but it would allow us to set the channels to listen or silent by default. Assets sent to addresses that were already 'on-chain' can be quarantined. The user can burn them or take them out of quarantine.
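That default rule is simple enough to sketch; here is a minimal client-side version (names hypothetical, not reference-wallet behavior):

```python
def default_channel_state(address_previously_on_chain: bool,
                          assets_already_at_address: int) -> str:
    """Default listen/mute state for a token's channel under the
    anti-spam strategy above."""
    if address_previously_on_chain:
        return "quarantined"  # unsolicited send to a reused address
    if assets_already_at_address == 0:
        return "listening"    # first asset to a freshly generated address
    return "muted"            # 2nd and later assets default to silent
```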

RavencoinDev at 2:18 PM

Okay, let me know when/if you guys read through all that.

corby at 2:18 PM

To be clear this is a client-side issue -- anyone will be able to send anything (including messages) to any address on chain..

RavencoinDev at 2:18 PM

It'll be in the Medium post later.

Tron at 2:19 PM

@corby The reference client will only show messages signed by the issuer or designated channels. Who is ready for another wall of text?

corby at 2:19 PM

I hear that's the plan 📷 just pointing out that it is on the client in these cases..

Tron at 2:20 PM

Yes, any client can show anything gleaned from the chain. Goal: a simple message format without photos. URL links are allowed, and most clients will automatically "linkify" the message for valid URLs. For display, the message file must be a valid JSON file:
{
"subject": "This is the optional subject",
"message": "This is required.",
"expires": 1578034800
}
Only "message" is required:
{"message":"Hello world"}

bhorn at 2:21 PM

expires?

Vincent at 2:21 PM

discount coupon?

Tron at 2:21 PM

If you have a message that is worthless (say, after a vote), just don't show the message.

bhorn at 2:21 PM

i see - more client side operation

corby at 2:21 PM

/expires

Tron at 2:22 PM

Yep. And the expiration could be used by IPFS pinners to stop worrying about the message. Optional

RavencoinDev at 2:22 PM

If the client sees a message that is expired it just won't display it.

Vincent at 2:23 PM

will that be messaged? otherwise it may cause confusion: 'expired'

RavencoinDev at 2:23 PM

Yes. We'll do our best to make it intuitive.

Tron at 2:24 PM

Client handling of messages:
- Pop-up messages or notifications when running live.
- Show messages for any assets sent to a new address - by default.
- Mute messages for assets sent to an address that was already on-network.
- Have a setting to not show messages older than X.
IPFSHash (or 8 bytes of it) =

Rikki RATTOE at 2:25 PM

will there be a file size limit for IPFS creation in the wallet?

RavencoinDev at 2:25 PM

We'll also provide updated documentation.

Tron at 2:26 PM

Excellent question Rikki. Here are some guidelines.

Guidelines:
- Clients are free to show or not show poorly formed messages. Reference clients will limit message display to properly formed messages.
- If subject is missing, the first line of the message will be used (up to 80 chars).
- Standard JSON encoding for newlines, tabs, etc. https://www.freeformatter.com/json-escape.html
- Expiration is optional, but desired. Will stop showing the message after X date, where X is specified as Unix Epoch. Good for invites, voting requests, and other time-sensitive messages that have no value after a specific date.
- By default clients will not show a message after X blocks (default 1 year).
- Amount of subject shown will be client dependent - the reference client may cut off at 80 chars.
- Messages longer than 15,000 characters (about 8 pages) will not be pinned to IPFS by some scanners, and may be rejected altogether by the client.
- Images will not be shown in reference clients. Other clients may show any IPFS content at their discretion.
- IPFSHash is only a "published" message if the Admin/Owner or Channel token is sent from/to the same address. This allows for standard transfers with metadata that don't "publish".
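The display rules are mechanical enough to sketch in code. Here is an illustrative (non-reference) Python version of the client-side check:

```python
import json
import time

MAX_CHARS = 15_000  # longer messages may be rejected or never pinned

def displayable(raw: str, now=None):
    """Return (subject, message) if `raw` passes the guidelines above,
    else None. A client-side sketch, not reference code."""
    if len(raw) > MAX_CHARS:
        return None
    try:
        doc = json.loads(raw)
    except ValueError:
        return None          # poorly formed: clients are free to not show it
    body = doc.get("message")
    if not isinstance(body, str):
        return None          # "message" is the only required field
    expires = doc.get("expires")
    if expires is not None and expires < (now or int(time.time())):
        return None          # expired: stop showing
    subject = doc.get("subject") or body.splitlines()[0][:80]
    return subject, body

assert displayable('{"message":"Hello world"}') == ("Hello world", "Hello world")
```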

RavencoinDev at 2:26 PM

We're hoping to add preferences that will allow the user to customize their messaging experience.

Tron at 2:27 PM

Also, happy to receive feedback from everyone.

corby at 2:27 PM

In theory though, if you maintain your own IPFS nodes, you should be able to reference files of whatever size, right?

Steelers at 2:27 PM

How about a simple Stop light approach - Green (ball) New Message, Yellow (Ball) Expiring Messages, Red (Ball) Expired Messages

RavencoinDev at 2:27 PM

Yes please! That's the point of sharing it here

Chatturga at 2:27 PM

Fixt

push | ravenland.org at 2:28 PM

Thanks @Tron - can you provide any details of the coming 'tooling' at the end of November, and what that might enable? (apologies if this has been asked already, as I am a bit late to the meeting)

VeronicaBOT at 2:28 PM

sup guys

Tron at 2:28 PM

Sure, that's coming.

RavencoinDev at 2:28 PM

That's the Raven WebDev Kit topic coming up in a few mins.

push | ravenland.org at 2:29 PM

oki 📷 cheers

RavencoinDev at 2:29 PM

Questions on messaging?

Jeroz at 2:30 PM

Not sure if I missed it, but how fast could you send multiple messages in succession?

BruceFenton at 2:30 PM

Some kind of sweep feature or block feature for both tokens and messages could be useful. Certain messages will be illegal to possess in certain jurisdictions. If someone sends a picture of the Tiananmen tank man in China, or a message calling for the overthrow of a ruler, it could be illegal for someone to have. There's no way for that jurisdiction to censor the chain, so some users might want the option to purge messages or not receive them client-side / on the wallet.

Tron at 2:30 PM

Messages are a transaction.

RavencoinDev at 2:30 PM

So it'll cost you to spam messages. They can only send a hash pointing to that picture, and the client won't display anything that's not JSON.

corby at 2:31 PM

purge/block is the age-old email spam problem

Tron at 2:31 PM

The Reference client - other clients / web sites, etc can show anything they wish.

RavencoinDev at 2:31 PM

You can also burn a token if you never want to receive messages from that token owner.

UserJonPizza|MinePool.com|Mom at 2:32 PM

Can't they just resend the token?

Tron at 2:33 PM

Yes, but it would default to mute.

RavencoinDev at 2:33 PM

meaning it would show up in a spam folder/tab

bhorn at 2:33 PM

is muting available for the initial asset as well?

RavencoinDev at 2:33 PM

Something easy to ignore if muted.

Tron at 2:33 PM

@bhorn Yes

BruceFenton at 2:33 PM

Can users nite some assets and not others?

Tron at 2:33 PM

@bhorn It just isn't the default.

BruceFenton at 2:33 PM

Mute

RavencoinDev at 2:33 PM

Yes. You can mute per token.

BruceFenton at 2:34 PM

Great

Tron at 2:34 PM

And per token per channel.

Jeroz at 2:34 PM

channels are the subtokens?

BruceFenton at 2:34 PM

What’s per token per channel mean ?

Tron at 2:34 PM

The issuer sends to the "Primary" channel. The token owner can create channels like "Alert", "Emergency", etc. These "talking sticks" are similar to unique assets. ASSET~Channel

RavencoinDev at 2:37 PM

Okay, we have a few more topics to cover today. Tron will post more details on Medium and we can continue discussions there.

Jeroz at 2:38 PM

Ah, I missed the channel creation bit for each token, with the 5 RVN per channel cost. It makes more sense to me now.

RavencoinDev at 2:38 PM

The developers are working towards posting a new version 2.2 that has the updated UI shown on twitter.

Vincent at 2:39 PM

twit link?

RavencoinDev at 2:39 PM

The consuming of large birds (not ravens) might slow the release a bit. So likely the week after Thanksgiving.

[Dev] Blondfrogs at 2:39 PM

The new UI will contain: - New menu layout - New icons - Dark mode - Added RVN colors

Dan1666 at 2:39 PM

+1 Dark mode

RavencoinDev at 2:39 PM

DARK MODE!

Dan1666 at 2:40 PM

so pleased about that

RavencoinDev at 2:40 PM

I can honestly say it'll be the nicest crypto wallet out there.

[Dev] Blondfrogs at 2:40 PM

A little sneak peek, but this is not the final product.

!S1LVA | MINEPOOL at 2:40 PM

Outstanding

Dan1666 at 2:41 PM

reminds me of Sub7 ui for those that might remember

UserJonPizza|MinePool.com|Mom at 2:41 PM

Can we have an asset count at the top?

[Dev] Blondfrogs at 2:41 PM

Icons will be changing

Vincent at 2:41 PM

does the 'transfer assets' have a this for that component?

Tron at 2:41 PM

Build from develop to see the sneak preview in action. There may be small glitches depending on OS. These are being worked on.

Rikki RATTOE at 2:41 PM

No plans for the mobile wallet to show an IPFS image I'm assuming? Would be a nice feature if say a retail store could send a QR coupon code to their token holders and they could scan the coupon using their wallet in store

[Dev] Blondfrogs at 2:42 PM

@Vincent That will probably be a different section added later

RavencoinDev at 2:42 PM

Yes, Rikki, we do want to support messaging. Looking into how that would work with Apple and Google push.

push | ravenland.org at 2:42 PM

sub7 haha oldschool - it is so similar as well

[Master] Roshii at 2:43 PM

Messages are transactions, no need for any push.

Tron at 2:43 PM

@Rikki RATTOE There's a danger in showing graphics where anyone can post anything without accountability for their actions. A client that only shows tokens for a specific asset could do this

RavencoinDev at 2:43 PM

True, unless you want to see the messages even if you haven't opened your wallet in a week.

Rikki RATTOE at 2:44 PM

the only thing I was thinking was if you simply linked the image, somebody could just copy the link and text it off to everyone and the coupon isn't all that exclusive

UserJonPizza|MinePool.com|Mom at 2:44 PM

Maybe a mobile link-up for an easy way to see messages by just importing a pubkey

RavencoinDev at 2:45 PM

Speaking of mobile: we are also getting close to a release of mobile that includes the ability to show assets held, and transfer them. Roshii has been hard at work.

Vincent at 2:46 PM

can be hidden also?

RavencoinDev at 2:47 PM

We're still finalizing the UI design but that is on the list of todos

Under at 2:47 PM

Could we do zero-fee mempool messaging that basically gets destroyed after it expires out of the mempool, for real-time stealth-mode messaging?

corby at 2:48 PM

That's interesting!

RavencoinDev at 2:49 PM

There are other solutions available for stealth messaging; that's not what the devs intended to build. It does sound cool though, @Under.

Under at 2:50 PM

We’ll keep up the good work. Looking forward to the db upgrades. Will test this weekend

RavencoinDev at 2:50 PM

Thanks! That leaves us with 10 minutes for the Dev Kit! Corby has been working on expanding some of the awesome work that @Under has been doing.

corby at 2:52 PM

Yes -- all of the -addressindex rpc calls are being updated to work with assets

RavencoinDev at 2:52 PM

Hopefully we'll be able to post the source soon once the initial use cases are all working.

corby at 2:52 PM

so assets are being tied into transaction history, utxos, etc

RavencoinDev at 2:52 PM

The devs want to provide a set of APIs that make it easy for web developers to build solutions on top of Ravencoin. VinX is investigating the possibility of using Ravencoin to power their solution.

corby at 2:53 PM

will be exposed via insight-api which we've forked from @Under

[Master] Roshii at 2:53 PM

Something worth bringing up is that you will be able to get specific asset data from full nodes with specific message protocols.

corby at 2:54 PM

also working on js lib for client side construction of asset transactions

Tron at 2:55 PM

Dev Kit will be an ongoing project so others can contribute and extend the APIs and capabilities of the 2nd layer.

RavencoinDev at 2:55 PM

Will be posted soon to the RavenProject GitHub.

corby at 2:55 PM

separate thing but yes Roshii that is worth mentioning -- network layer for getting asset data

RavencoinDev at 2:55 PM

Again want to give thanks to @Under for getting a great start on the project

push | ravenland.org at 2:56 PM

Yes, looking forward to seeing more on the extensive API and capabilities. Is there a wiki on this anywhere, Tron? (so as to prevent people replicating each other's work?)

RavencoinDev at 2:56 PM

The wiki will be in the project on GitHub

push | ravenland.org at 2:56 PM

im guessing when the kit is released, something will appear, okok cool

RavencoinDev at 2:57 PM

Any questions about the Web DevKit?

push | ravenland.org at 2:57 PM

well, what kind of support will it give us? that would be nice, is this written anywhere? I'm still relatively new to blockchain (<2 years) so need some hand-holding i suppose

bhorn at 2:58 PM

right, what are initial use cases of the devkit?

push | ravenland.org at 2:58 PM

i mean im guessing metamask-like capability, some kind of smart contract, some automation capabilities, rpc scripts, stuff like this, even if proof of concept or example. i guess im wondering if my hopes are realistic

RavencoinDev at 2:59 PM

You can see the awesome work that @Under has already done that we are building on top of.

push | ravenland.org at 2:59 PM

yes @Under is truly a herooki, cool

RavencoinDev at 2:59 PM

https://ravencoin.network/ - Ravencoin Block Explorer (Ravencoin Insight: view detailed information on all Ravencoin transactions and blocks)

push | ravenland.org at 2:59 PM

ok, sweet, that is very encouraging. thanks @Under for making that code public

corby at 3:00 PM

It will hopefully allow you to write all sorts of clients -- depending on the complexity of the use case you might just have a js lib (wallet functions, ability to post txs to a gateway) or a server-side project (asset explorer or exchange).

Tron at 3:00 PM

Yeah, thanks @Under .

RavencoinDev at 3:00 PM

What's your GitHub URL @Under ?

push | ravenland.org at 3:00 PM

https://github.com/underdarkskies/ i believe

RavencoinDev at 3:00 PM

Yup!

push | ravenland.org at 3:00 PM

he is truly a hero

RavencoinDev at 3:00 PM

LOL

push | ravenland.org at 3:00 PM

damn, o's go missing everywhere

RavencoinDev at 3:01 PM

teh o's are hard... Just ask @Chatturga


Chatturga at 3:01 PM

O's arent the problem...


RavencoinDev at 3:02 PM

Alright we're at time and the devs are super busy. Thanks everybody for joining us.

push | ravenland.org at 3:02 PM

thanks guys

RavencoinDev at 3:02 PM

Thank you all for supporting the Raven community.

corby at 3:02 PM

thanks all!

push | ravenland.org at 3:02 PM

keep up the awesome work, whilst bitcoin sv and bitcoin abc fight, another bitcoin fork raven, raven thru the night

Vincent at 3:02 PM

piece!!

RavencoinDev at 3:03 PM

We're amazingly blessed to have you on this ride with us.

Dan1666 at 3:03 PM

gg


UserJonPizza|MinePool.com|Mom at 3:55 PM

Good meeting! Excited for the new QT!!
submitted by Chatturga to u/Chatturga [link] [comments]

Harmony Project Update

Since our last newsletter, we have started open-sourcing our networking stack and exploring strategic partnerships.
Here’re the highlights:
Started to open source our codebase & a new umbrella project called libunison,
Hired 2 new teammates & started targeting 100+ strategic partners,
Submitted an arxiv preprint of neuroscience paper & updated our testnet architecture,
Continued growing TGI community & conducting 10+ podcast interviews.

Open source & networking with libunison
We have identified data availability and block propagation to be the main bottlenecks of scaling transactions with tens of thousands of (some potentially malicious) nodes over the Internet. Our insight is to use the RaptorQ fountain code in conjunction with a forward error correction scheme for broadcasting message blocks, without incurring round-trip delays to recover from packet losses, over adversarial networks.
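To illustrate the idea with a toy LT-style fountain encoder (not RaptorQ itself - the production code is specified in RFC 6330): each encoded symbol is the XOR of a random subset of source chunks, so a receiver can rebuild the block from any sufficiently large set of symbols without requesting retransmissions.

```python
import random

def encode_symbols(block: bytes, k: int, count: int, seed: int = 1):
    """Toy fountain encoder: split `block` into k chunks and emit `count`
    symbols, each the XOR of a random subset of chunks. Real fountain
    codes use carefully designed degree distributions; this sketch only
    shows why no retransmission round-trips are needed."""
    size = -(-len(block) // k)  # ceil division
    chunks = [block[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    rng = random.Random(seed)
    symbols = []
    for _ in range(count):
        idxs = rng.sample(range(k), rng.randint(1, k))
        acc = bytearray(size)
        for i in idxs:
            acc = bytearray(a ^ b for a, b in zip(acc, chunks[i]))
        symbols.append((idxs, bytes(acc)))
    return symbols  # any sufficiently large subset decodes back to `block`
```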
Here we’re launching our open source effort github.com/harmony-one with a go-raptorq wrapper under our umbrella project libunison (see our roadmap).
Our libunison is an end-to-end and peer-to-peer networking library for any application that needs to self-organize an emerging network of nodes. The library is built upon existing standardized technologies, including the Host Identity Protocol (HIPv2) and Encapsulating Security Payload (ESP), to leverage decades of research, development and deployment insights.
Harmony is open sourcing libunison as one of the foundational layers of not only our network but also other performant, decentralized networks such as peer multicasts.
2 new teammates & 100+ strategic partners
Our team is growing! Chao Ma (Amazon AI engineer, Math Ph.D. at CU Boulder, non-linear analysis researcher) is joining the team to tackle protocol research and statistical consensus. Chao has been researching blockchain algorithms since 2017 and recently implemented a toy IPFS for fun.
So did our good friend Li Jiang (GSV Capital, logistics startup founder, Northwestern University adjunct, nickname 蒋·和梦·犁). Li has been our evangelist since our first China trip in February and finally decided to jump off the cliff to lead Harmony’s partnership efforts full time. As the newest node with awe on the Harmony team, Li also serves as “Chief Frisbee Officer” to keep us active in the winter.
We are planning our second token sale. Inspired by these insightful articles by Multicoin and by Notation on value-adding investors as operators, we are asking our new investors to operate Harmony nodes. Scalability and decentralization are the two most important metrics for Harmony to succeed. We will achieve both by having tens of thousands of nodes, the scale of Bitcoin and Ethereum, run by many independent entities in jurisdictions all over the world.
Having many nodes is key to network performance with our sharding approach; meanwhile, having independent entities is key to network security with our permissionless principle. If you are non-US based and looking to participate in this strategic round, contact us at harmony.one/partners.
Neuroscience preprint & testnet architecture
Our colleague Prof. Lau has led our team with his research and submitted a paper “Blockchain and human episodic memory” (see preprint on arxiv) on relating brain consciousness to blockchain consensus. We highlight that certain phenomena studied in the brain, namely metacognition, reality monitoring, and how perceptual conscious experiences come about, may inspire development in blockchain technology too, specifically regarding probabilistic consensus protocols.
Our colleague Ka-yuet Liu, also at UCLA, has published Data Marketplace for Scientists in our blog. She highlights a modern economic theory of the nonrivalry of data, concluding that “blockchain can turn wasteful competition between large-scale science projects into synergy” among internationally recognized scientists like themselves.
Our testnet architecture has been updated to apply the latest research results and progress made by Ethereum 2.0. Zero-knowledge proofs by Starkware are now fast enough to be generated on mobile clients and may be used to scale blockchains by many orders of magnitude. Fraud proofs (with 2D erasure codes and interleaved sampling), stateless clients (with algebraic vector commitments), comparing synchronous (with exact round complexity of 10, versus 29 previously) and partially synchronous protocols, and integrating 99% fault tolerance (with hybrid threshold-dependent and latency-dependent consensus) are on our roadmap.
Growing TGI community & 10+ interviews
Early this month, we had an in-depth founder interview with Hacker Noon, one of the most-read publications among engineers and entrepreneurs. On the topic of attracting users and building communities, we answered “some conversations are multiplicative — we multiply each other’s dreams. And every once in a while, a conversation is exponential, meaning we really build deep belief in each other’s vision and can make it come to life.”
Furthermore, Spencer writes about the Future of Scalable Blockchain and compares Harmony to Ethereum 2.0, Dfinity, Cardano and Nervos, complimenting that Harmony’s approach “is highly cerebral and in tune with the best technology currently available… their spirit of inclusion and entrepreneurship feels a bit more sincere.”

We continue to engage a global community to share the Harmony story. Here are just a few podcast interviewers and writers we are engaging with. Be sure to check out their work and keep an eye out as our stories will be published soon. Our conversations with these influencers span Silicon Valley, China, SE Asia, India, Australia and Brazil this month. Thanks to Jon Victor from The Information, Joyce Yang from Global Coin Research, Tushar Aggarwal from LunexVC & DecryptAsia, Brad Laurie also known as BlockchainBrad and Gerson Ribeiro from Startup de Alto Impacto for sharing our journey.
We are also hosting TGI-Blockchain on Saturdays now from 12pm to 4pm at our home-office for fellow founders and collaborators to deeply engage with each other. We are inspired by these builders presenting their works (sign up here!) every Saturday, including Timeless Protocol, Rational Mind, Blue Vista and Tara.AI in recent weeks.
Our team is sharing our learnings globally at a recent talk in India and upcoming events in Hong Kong and online with TokenGazer, as well as meeting our local friends from the ABC Blockchain and Xoogler communities.

Essential advice & your help
We’re taking the top two points from Y Combinator’s Essential Startup Advice (posted next to our coffee machine) to heart: Launch now, and Build something people want.
We published a survey on blockchain testnets and we’re laser-focused on building our own public testnet, implementing information dispersal algorithm, state syncing and resharding at the moment.
Lastly, we need your help on hiring database engineers to hack Byzantine agreements and broadcasts, and on bringing in strategic investors to run Harmony nodes all over the world!
Stephen Tse, Harmony CEO
https://harmony.one/
submitted by 67vader to CryptocurrencyICO [link] [comments]

In case you missed it: Major Crypto and Blockchain News from the week ending 12/14/2018

Developments in Financial Services

Regulatory Environment

General News


submitted by QuantalyticsResearch to CryptoCurrency [link] [comments]
