Introduction
A blockchain is a data structure in which blocks containing transactions are linked together in chronological order. Conceptually it is a linked list: all data is chained in order onto the same chain, and each block points at its predecessor through a hash pointer. The hash pointer is the hash of the previous block's header (SHA-256 in Bitcoin; Ethereum uses Keccak-256), and it uniquely identifies that block. By appending each block after the block identified by the "parent block hash" in its header, a complete blockchain is built up.
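To make the hash-pointer idea concrete, here is a toy sketch in Go: a deliberately simplified header hashed with plain SHA-256. Real chains hash a much richer header (and Ethereum hashes the RLP encoding with Keccak-256); this only illustrates the linking.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// toyHeader is a deliberately simplified block header: real headers
// carry many more fields (state root, timestamp, nonce, ...).
type toyHeader struct {
	PrevHash [32]byte // hash pointer to the parent header
	Data     string   // stand-in for the block body / tx root
}

// hash computes SHA-256 over the parent pointer plus the payload.
func (h toyHeader) hash() [32]byte {
	return sha256.Sum256(append(h.PrevHash[:], h.Data...))
}

func main() {
	genesis := toyHeader{Data: "genesis"}
	gHash := genesis.hash()

	// The child commits to its parent by embedding the parent's hash:
	// changing anything in genesis would change gHash, and thus the child.
	child := toyHeader{PrevHash: gHash, Data: "block 1"}
	cHash := child.hash()

	fmt.Println("genesis hash  :", hex.EncodeToString(gHash[:]))
	fmt.Println("block 1 parent:", hex.EncodeToString(child.PrevHash[:]))
	fmt.Println("block 1 hash  :", hex.EncodeToString(cHash[:]))
}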
Block Structure
A block is the smallest unit of data storage in a blockchain. Each block consists of two parts, a "block header" and a "block body". The block body stores the transactions, while the block header contains the version number, the parent block hash, the Merkle root, a timestamp, the difficulty target, a nonce, and so on:
Note: the block hash is the 32-byte value obtained by hashing the block header twice with SHA256; it is also called the block header hash, and it is not a hash of the entire block.
The block data structure is defined as follows (hash, size, td and the like are computed while receiving and validating a block; when a block is broadcast to the network it carries only the header and the body):
// filedir: go-ethereum-1.10.2\core\types\block.go L149
// Block represents an entire block in the Ethereum blockchain.
type Block struct {
	header       *Header
	uncles       []*Header
	transactions Transactions

	// caches
	hash atomic.Value
	size atomic.Value

	// Td is used by package core to store the total difficulty
	// of the chain up to and including the block.
	td *big.Int

	// These fields are used by package eth to track
	// inter-peer block relay.
	ReceivedAt   time.Time
	ReceivedFrom interface{}
}
The block header data structure is defined as follows:
// filedir: go-ethereum-1.10.2\core\types\block.go L66
//go:generate gencodec -type Header -field-override headerMarshaling -out gen_header_json.go

// Header represents a block header in the Ethereum blockchain.
type Header struct {
	ParentHash  common.Hash    `json:"parentHash"       gencodec:"required"`
	UncleHash   common.Hash    `json:"sha3Uncles"       gencodec:"required"`
	Coinbase    common.Address `json:"miner"            gencodec:"required"`
	Root        common.Hash    `json:"stateRoot"        gencodec:"required"`
	TxHash      common.Hash    `json:"transactionsRoot" gencodec:"required"`
	ReceiptHash common.Hash    `json:"receiptsRoot"     gencodec:"required"`
	Bloom       Bloom          `json:"logsBloom"        gencodec:"required"`
	Difficulty  *big.Int       `json:"difficulty"       gencodec:"required"`
	Number      *big.Int       `json:"number"           gencodec:"required"`
	GasLimit    uint64         `json:"gasLimit"         gencodec:"required"`
	GasUsed     uint64         `json:"gasUsed"          gencodec:"required"`
	Time        uint64         `json:"timestamp"        gencodec:"required"`
	Extra       []byte         `json:"extraData"        gencodec:"required"`
	MixDigest   common.Hash    `json:"mixHash"`
	Nonce       BlockNonce     `json:"nonce"`
}
Field descriptions (a short example of computing the header hash follows this list):

- ParentHash: hash of the parent block's header
- Coinbase: address of the miner's account
- UncleHash: RLP (recursive length prefix) hash of the Block's Uncles member
- Root: root hash of the state trie (the Merkle tree root of account state)
- TxHash: root hash of the transaction trie built from all transactions in the block
- ReceiptHash: RLP hash of the root node of the block's "Receipt trie"
- Bloom: a Bloom filter used to quickly test whether a given Log object is contained in a known set of logs
- Difficulty: difficulty of the block
- Number: block number; the current block's Number equals its parent's Number + 1
- Time: timestamp at block creation, set by the consensus code: the current system time, or parentBlock.Time + 1 if the system time is not later than the parent's timestamp
- GasLimit: theoretical upper limit on the total gas consumed within the block
- GasUsed: total gas actually consumed by all transactions executed in the block
- Nonce: a 64-bit value used in the "mining" phase of a block, and modified during its use
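Since the block hash is the Keccak-256 hash of the RLP-encoded header, we can reproduce it with go-ethereum as a library. A minimal sketch, assuming go-ethereum v1.10.x is on the module path; the field values below are arbitrary illustrations:

package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/core/types"
)

func main() {
	// Assemble a header with a few illustrative values; fields left at
	// their zero value still participate in the RLP encoding.
	h := &types.Header{
		Number:     big.NewInt(1),
		Difficulty: big.NewInt(131072),
		GasLimit:   5000,
		Time:       1438269988,
	}
	// Hash() RLP-encodes the header and Keccak256-hashes it -- this is
	// the "block hash" a child block stores in its ParentHash field.
	fmt.Println("header hash:", h.Hash().Hex())
}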
One additional point: in Bitcoin the transactions in the block body are organized as a Merkle tree whose root is stored in the block header, whereas Ethereum uses the Merkle Patricia Trie (MPT) structure, and there are three such trees in total:

- State trie: in StateDB every account is represented by a stateObject; all account objects are inserted one by one into an MPT, forming the "state trie"
- Tx trie: all tx objects in a Block's transactions are inserted one by one into an MPT, forming the "tx trie"
- Receipt trie: executing the transactions produces an array of Receipts; all of them are inserted one by one into an MPT, forming the "receipt trie"
We will dig into these three trees in the later section on transactions, so we leave them aside here.
Genesis Block
The first block in a blockchain is called the "genesis block"; it is the common ancestor of all blocks on the chain. When we start a node without specifying a genesis file (genesis.json), the node first tries to load block data from its local LevelDB database. If no block is found there, the node considers itself a brand-new node and initializes the local chain from the hard-coded genesis information:
// filedir: go-ethereum-1.10.2\params\config.go L28
// Genesis hashes to enforce below configs on.
var (
	MainnetGenesisHash = common.HexToHash("0xd4e56740f876aef8c010b86a40d5f56745a118d0906a34e69aec8c0db1cb8fa3")
	RopstenGenesisHash = common.HexToHash("0x41941023680923e0fe4d74a34bdac8141f2540e3ae90623718e47d66d1ca4a2d")
	RinkebyGenesisHash = common.HexToHash("0x6341fd3daf94b748c72ced5a5b26028f2474f5f00d824504e4fa37a75767e177")
	GoerliGenesisHash  = common.HexToHash("0xbf7e331f7f7c1dd2e05159666b3bf8bc7a8a3a9eb1d518969eab529dd9b88c1a")
	YoloV3GenesisHash  = common.HexToHash("0x374f07cc7fa7c251fc5f36849f574b43db43600526410349efdca2bcea14101a")
)

// TrustedCheckpoints associates each known checkpoint with the genesis hash of
// the chain it belongs to.
var TrustedCheckpoints = map[common.Hash]*TrustedCheckpoint{
	MainnetGenesisHash: MainnetTrustedCheckpoint,
	RopstenGenesisHash: RopstenTrustedCheckpoint,
	RinkebyGenesisHash: RinkebyTrustedCheckpoint,
	GoerliGenesisHash:  GoerliTrustedCheckpoint,
}

// CheckpointOracles associates each known checkpoint oracles with the genesis hash of
// the chain it belongs to.
var CheckpointOracles = map[common.Hash]*CheckpointOracleConfig{
	MainnetGenesisHash: MainnetCheckpointOracle,
	RopstenGenesisHash: RopstenCheckpointOracle,
	RinkebyGenesisHash: RinkebyCheckpointOracle,
	GoerliGenesisHash:  GoerliCheckpointOracle,
}

var (
	// MainnetChainConfig is the chain parameters to run a node on the main network.
	MainnetChainConfig = &ChainConfig{
		ChainID:             big.NewInt(1),
		HomesteadBlock:      big.NewInt(1_150_000),
		DAOForkBlock:        big.NewInt(1_920_000),
		DAOForkSupport:      true,
		EIP150Block:         big.NewInt(2_463_000),
		EIP150Hash:          common.HexToHash("0x2086799aeebeae135c246c65021c82b4e15a2c451340993aacfd2751886514f0"),
		EIP155Block:         big.NewInt(2_675_000),
		EIP158Block:         big.NewInt(2_675_000),
		ByzantiumBlock:      big.NewInt(4_370_000),
		ConstantinopleBlock: big.NewInt(7_280_000),
		PetersburgBlock:     big.NewInt(7_280_000),
		IstanbulBlock:       big.NewInt(9_069_000),
		MuirGlacierBlock:    big.NewInt(9_200_000),
		BerlinBlock:         big.NewInt(12_244_000),
		Ethash:              new(EthashConfig),
	}

	// MainnetTrustedCheckpoint contains the light client trusted checkpoint for the main network.
	MainnetTrustedCheckpoint = &TrustedCheckpoint{
		SectionIndex: 371,
		SectionHead:  common.HexToHash("0x50fd3cec5376ede90ef9129772022690cd1467f22c18abb7faa11e793c51e9c9"),
		CHTRoot:      common.HexToHash("0xb57b4b22a77b5930847b1ca9f62daa11eae6578948cb7b18997f2c0fe5757025"),
		BloomRoot:    common.HexToHash("0xa338f8a868a194fa90327d0f5877f656a9f3640c618d2a01a01f2e76ef9ef954"),
	}

	// MainnetCheckpointOracle contains a set of configs for the main network oracle.
	MainnetCheckpointOracle = &CheckpointOracleConfig{
		Address: common.HexToAddress("0x9a9070028361F7AAbeB3f2F2Dc07F82C4a98A02a"),
		Signers: []common.Address{
			common.HexToAddress("0x1b2C260efc720BE89101890E4Db589b44E950527"), // Peter
			common.HexToAddress("0x78d1aD571A1A09D60D9BBf25894b44e4C8859595"), // Martin
			common.HexToAddress("0x286834935f4A8Cfb4FF4C77D5770C2775aE2b0E7"), // Zsolt
			common.HexToAddress("0xb86e2B0Ab5A4B1373e40c51A7C712c70Ba2f9f8E"), // Gary
			common.HexToAddress("0x0DF8fa387C602AE62559cC4aFa4972A7045d6707"), // Guillaume
		},
		Threshold: 2,
	}

	// RopstenChainConfig contains the chain parameters to run a node on the Ropsten test network.
	RopstenChainConfig = &ChainConfig{
		ChainID:             big.NewInt(3),
		HomesteadBlock:      big.NewInt(0),
		DAOForkBlock:        nil,
		DAOForkSupport:      true,
		EIP150Block:         big.NewInt(0),
		EIP150Hash:          common.HexToHash("0x41941023680923e0fe4d74a34bdac8141f2540e3ae90623718e47d66d1ca4a2d"),
		EIP155Block:         big.NewInt(10),
		EIP158Block:         big.NewInt(10),
		ByzantiumBlock:      big.NewInt(1_700_000),
		ConstantinopleBlock: big.NewInt(4_230_000),
		PetersburgBlock:     big.NewInt(4_939_394),
		IstanbulBlock:       big.NewInt(6_485_846),
		MuirGlacierBlock:    big.NewInt(7_117_117),
		BerlinBlock:         big.NewInt(9_812_189),
		Ethash:              new(EthashConfig),
	}

	// RopstenTrustedCheckpoint contains the light client trusted checkpoint for the Ropsten test network.
	RopstenTrustedCheckpoint = &TrustedCheckpoint{
		SectionIndex: 279,
		SectionHead:  common.HexToHash("0x4a4912848d4c06090097073357c10015d11c6f4544a0f93cbdd584701c3b7d58"),
		CHTRoot:      common.HexToHash("0x9053b7867ae921e80a4e2f5a4b15212e4af3d691ca712fb33dc150e9c6ea221c"),
		BloomRoot:    common.HexToHash("0x3dc04cb1be7ddc271f3f83469b47b76184a79d7209ef51d85b1539ea6d25a645"),
	}

	// RopstenCheckpointOracle contains a set of configs for the Ropsten test network oracle.
	RopstenCheckpointOracle = &CheckpointOracleConfig{
		Address: common.HexToAddress("0xEF79475013f154E6A65b54cB2742867791bf0B84"),
		Signers: []common.Address{
			common.HexToAddress("0x32162F3581E88a5f62e8A61892B42C46E2c18f7b"), // Peter
			common.HexToAddress("0x78d1aD571A1A09D60D9BBf25894b44e4C8859595"), // Martin
			common.HexToAddress("0x286834935f4A8Cfb4FF4C77D5770C2775aE2b0E7"), // Zsolt
			common.HexToAddress("0xb86e2B0Ab5A4B1373e40c51A7C712c70Ba2f9f8E"), // Gary
			common.HexToAddress("0x0DF8fa387C602AE62559cC4aFa4972A7045d6707"), // Guillaume
		},
		Threshold: 2,
	}

	// RinkebyChainConfig contains the chain parameters to run a node on the Rinkeby test network.
	RinkebyChainConfig = &ChainConfig{
		ChainID:             big.NewInt(4),
		HomesteadBlock:      big.NewInt(1),
		DAOForkBlock:        nil,
		DAOForkSupport:      true,
		EIP150Block:         big.NewInt(2),
		EIP150Hash:          common.HexToHash("0x9b095b36c15eaf13044373aef8ee0bd3a382a5abb92e402afa44b8249c3a90e9"),
		EIP155Block:         big.NewInt(3),
		EIP158Block:         big.NewInt(3),
		ByzantiumBlock:      big.NewInt(1_035_301),
		ConstantinopleBlock: big.NewInt(3_660_663),
		PetersburgBlock:     big.NewInt(4_321_234),
		IstanbulBlock:       big.NewInt(5_435_345),
		MuirGlacierBlock:    nil,
		BerlinBlock:         big.NewInt(8_290_928),
		Clique: &CliqueConfig{
			Period: 15,
			Epoch:  30000,
		},
	}

	// RinkebyTrustedCheckpoint contains the light client trusted checkpoint for the Rinkeby test network.
	RinkebyTrustedCheckpoint = &TrustedCheckpoint{
		SectionIndex: 254,
		SectionHead:  common.HexToHash("0x0cba01dd71baa22ac8fa0b105bc908e94f9ecfbc79b4eb97427fe07b5851dd10"),
		CHTRoot:      common.HexToHash("0x5673d8fc49c9c7d8729068640e4b392d46952a5a38798973bac1cf1d0d27ad7d"),
		BloomRoot:    common.HexToHash("0x70e01232b66df9a7778ae3291c9217afb9a2d9f799f32d7b912bd37e7bce83a8"),
	}

	// RinkebyCheckpointOracle contains a set of configs for the Rinkeby test network oracle.
	RinkebyCheckpointOracle = &CheckpointOracleConfig{
		Address: common.HexToAddress("0xebe8eFA441B9302A0d7eaECc277c09d20D684540"),
		Signers: []common.Address{
			common.HexToAddress("0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3"), // Peter
			common.HexToAddress("0x78d1aD571A1A09D60D9BBf25894b44e4C8859595"), // Martin
			common.HexToAddress("0x286834935f4A8Cfb4FF4C77D5770C2775aE2b0E7"), // Zsolt
			common.HexToAddress("0xb86e2B0Ab5A4B1373e40c51A7C712c70Ba2f9f8E"), // Gary
		},
		Threshold: 2,
	}

	// GoerliChainConfig contains the chain parameters to run a node on the Görli test network.
	GoerliChainConfig = &ChainConfig{
		ChainID:             big.NewInt(5),
		HomesteadBlock:      big.NewInt(0),
		DAOForkBlock:        nil,
		DAOForkSupport:      true,
		EIP150Block:         big.NewInt(0),
		EIP155Block:         big.NewInt(0),
		EIP158Block:         big.NewInt(0),
		ByzantiumBlock:      big.NewInt(0),
		ConstantinopleBlock: big.NewInt(0),
		PetersburgBlock:     big.NewInt(0),
		IstanbulBlock:       big.NewInt(1_561_651),
		MuirGlacierBlock:    nil,
		BerlinBlock:         big.NewInt(4_460_644),
		Clique: &CliqueConfig{
			Period: 15,
			Epoch:  30000,
		},
	}

	// GoerliTrustedCheckpoint contains the light client trusted checkpoint for the Görli test network.
	GoerliTrustedCheckpoint = &TrustedCheckpoint{
		SectionIndex: 138,
		SectionHead:  common.HexToHash("0xb7ea0566abd7d0def5b3c9afa3431debb7bb30b65af35f106ca93a59e6c859a7"),
		CHTRoot:      common.HexToHash("0x378c7ea9081242beb982e2e39567ba12f2ed3e59e5aba3f9db1d595646d7c9f4"),
		BloomRoot:    common.HexToHash("0x523c169286cfca52e8a6579d8c35dc8bf093412d8a7478163bfa81ae91c2492d"),
	}

	// GoerliCheckpointOracle contains a set of configs for the Goerli test network oracle.
	GoerliCheckpointOracle = &CheckpointOracleConfig{
		Address: common.HexToAddress("0x18CA0E045F0D772a851BC7e48357Bcaab0a0795D"),
		Signers: []common.Address{
			common.HexToAddress("0x4769bcaD07e3b938B7f43EB7D278Bc7Cb9efFb38"), // Peter
			common.HexToAddress("0x78d1aD571A1A09D60D9BBf25894b44e4C8859595"), // Martin
			common.HexToAddress("0x286834935f4A8Cfb4FF4C77D5770C2775aE2b0E7"), // Zsolt
			common.HexToAddress("0xb86e2B0Ab5A4B1373e40c51A7C712c70Ba2f9f8E"), // Gary
			common.HexToAddress("0x0DF8fa387C602AE62559cC4aFa4972A7045d6707"), // Guillaume
		},
		Threshold: 2,
	}

	// YoloV3ChainConfig contains the chain parameters to run a node on the YOLOv3 test network.
	YoloV3ChainConfig = &ChainConfig{
		ChainID:             new(big.Int).SetBytes([]byte("yolov3x")),
		HomesteadBlock:      big.NewInt(0),
		DAOForkBlock:        nil,
		DAOForkSupport:      true,
		EIP150Block:         big.NewInt(0),
		EIP155Block:         big.NewInt(0),
		EIP158Block:         big.NewInt(0),
		ByzantiumBlock:      big.NewInt(0),
		ConstantinopleBlock: big.NewInt(0),
		PetersburgBlock:     big.NewInt(0),
		IstanbulBlock:       big.NewInt(0),
		MuirGlacierBlock:    nil,
		BerlinBlock:         nil, // Don't enable Berlin directly, we're YOLOing it
		YoloV3Block:         big.NewInt(0),
		Clique: &CliqueConfig{
			Period: 15,
			Epoch:  30000,
		},
	}

	// AllEthashProtocolChanges contains every protocol change (EIPs) introduced
	// and accepted by the Ethereum core developers into the Ethash consensus.
	//
	// This configuration is intentionally not using keyed fields to force anyone
	// adding flags to the config to also have to set these fields.
	AllEthashProtocolChanges = &ChainConfig{big.NewInt(1337), big.NewInt(0), nil, false, big.NewInt(0), common.Hash{}, big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), nil, nil, new(EthashConfig), nil}

	// AllCliqueProtocolChanges contains every protocol change (EIPs) introduced
	// and accepted by the Ethereum core developers into the Clique consensus.
	//
	// This configuration is intentionally not using keyed fields to force anyone
	// adding flags to the config to also have to set these fields.
	AllCliqueProtocolChanges = &ChainConfig{big.NewInt(1337), big.NewInt(0), nil, false, big.NewInt(0), common.Hash{}, big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), nil, nil, nil, &CliqueConfig{Period: 0, Epoch: 30000}}

	TestChainConfig = &ChainConfig{big.NewInt(1), big.NewInt(0), nil, false, big.NewInt(0), common.Hash{}, big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), nil, nil, new(EthashConfig), nil}
	TestRules       = TestChainConfig.Rules(new(big.Int))
)
Of course, we can also point the node at a genesis file directly at startup and initialize the chain through the init subcommand. Let's trace that flow:
./geth init <genesispath>
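For reference, a minimal genesis file that init accepts might look like the following; the chainId, fork heights, and allocation are illustrative values, and the field names follow the JSON encoding of core.Genesis:

{
  "config": {
    "chainId": 12345,
    "homesteadBlock": 0,
    "eip150Block": 0,
    "eip155Block": 0,
    "eip158Block": 0,
    "byzantiumBlock": 0,
    "constantinopleBlock": 0,
    "petersburgBlock": 0,
    "istanbulBlock": 0,
    "berlinBlock": 0
  },
  "nonce": "0x0000000000000042",
  "timestamp": "0x0",
  "extraData": "0x",
  "gasLimit": "0x47b760",
  "difficulty": "0x400",
  "alloc": {
    "0x0000000000000000000000000000000000000001": { "balance": "0x1" }
  }
}

Saving this as genesis.json and running ./geth init genesis.json writes the resulting block into both the chaindata and lightchaindata databases, as initGenesis below shows.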
After the command above is executed, geth enters the command-parsing stage:
// filedir: go-ethereum-1.10.2\cmd\geth\chaincmd.go L38
var (
	initCommand = cli.Command{
		Action:    utils.MigrateFlags(initGenesis),
		Name:      "init",
		Usage:     "Bootstrap and initialize a new genesis block",
		ArgsUsage: "<genesisPath>",
		Flags: []cli.Flag{
			utils.DataDirFlag,
		},
		Category: "BLOCKCHAIN COMMANDS",
		Description: `
The init command initializes a new genesis block and definition for the network.
This is a destructive action and changes the network in which you will be
participating.

It expects the genesis file as argument.`,
	}
Our custom genesis file is then applied in initGenesis, which checks the file's format and read permission, calls makeConfigNode to load and apply the global configuration, and then calls SetupGenesisBlock to create the genesis block:
// filedir: go-ethereum-1.10.2\cmd\geth\chaincmd.go L172
// initGenesis will initialise the given JSON format genesis file and writes it as
// the zero'd block (i.e. genesis) or will fail hard if it can't succeed.
func initGenesis(ctx *cli.Context) error {
	// Make sure we have a valid genesis JSON
	genesisPath := ctx.Args().First()
	if len(genesisPath) == 0 {
		utils.Fatalf("Must supply path to genesis JSON file")
	}
	file, err := os.Open(genesisPath)
	if err != nil {
		utils.Fatalf("Failed to read genesis file: %v", err)
	}
	defer file.Close()

	genesis := new(core.Genesis)
	if err := json.NewDecoder(file).Decode(genesis); err != nil {
		utils.Fatalf("invalid genesis file: %v", err)
	}
	// Open and initialise both full and light databases
	stack, _ := makeConfigNode(ctx)
	defer stack.Close()

	for _, name := range []string{"chaindata", "lightchaindata"} {
		chaindb, err := stack.OpenDatabase(name, 0, 0, "", false)
		if err != nil {
			utils.Fatalf("Failed to open database: %v", err)
		}
		_, hash, err := core.SetupGenesisBlock(chaindb, genesis)
		if err != nil {
			utils.Fatalf("Failed to write genesis block: %v", err)
		}
		chaindb.Close()
		log.Info("Successfully wrote genesis state", "database", name, "hash", hash)
	}
	return nil
}
SetupGenesisBlock is shown below. If no genesis block is stored yet, the new block is simply committed: when the Genesis argument is nil, the default main-net configuration is loaded; otherwise Commit is called to create the custom genesis block:
// filedir: go-ethereum-1.10.2\core\genesis.go L142
// SetupGenesisBlock writes or updates the genesis block in db.
// The block that will be used is:
//
//                          genesis == nil       genesis != nil
//                       +------------------------------------------
//     db has no genesis |  main-net default  |  genesis
//     db has genesis    |  from DB           |  genesis (if compatible)
//
// The stored chain configuration will be updated if it is compatible (i.e. does not
// specify a fork block below the local head block). In case of a conflict, the
// error is a *params.ConfigCompatError and the new, unwritten config is returned.
//
// The returned chain configuration is never nil.
func SetupGenesisBlock(db ethdb.Database, genesis *Genesis) (*params.ChainConfig, common.Hash, error) {
	return SetupGenesisBlockWithOverride(db, genesis, nil)
}

func SetupGenesisBlockWithOverride(db ethdb.Database, genesis *Genesis, overrideBerlin *big.Int) (*params.ChainConfig, common.Hash, error) {
	if genesis != nil && genesis.Config == nil {
		return params.AllEthashProtocolChanges, common.Hash{}, errGenesisNoConfig
	}
	// Just commit the new block if there is no stored genesis block.
	stored := rawdb.ReadCanonicalHash(db, 0)
	if (stored == common.Hash{}) {
		if genesis == nil {
			log.Info("Writing default main-net genesis block")
			genesis = DefaultGenesisBlock()
		} else {
			log.Info("Writing custom genesis block")
		}
		block, err := genesis.Commit(db)
		if err != nil {
			return genesis.Config, common.Hash{}, err
		}
		return genesis.Config, block.Hash(), nil
	}
	// We have the genesis block in database(perhaps in ancient database)
	// but the corresponding state is missing.
	header := rawdb.ReadHeader(db, stored, 0)
	if _, err := state.New(header.Root, state.NewDatabaseWithConfig(db, nil), nil); err != nil {
		if genesis == nil {
			genesis = DefaultGenesisBlock()
		}
		// Ensure the stored genesis matches with the given one.
		hash := genesis.ToBlock(nil).Hash()
		if hash != stored {
			return genesis.Config, hash, &GenesisMismatchError{stored, hash}
		}
		block, err := genesis.Commit(db)
		if err != nil {
			return genesis.Config, hash, err
		}
		return genesis.Config, block.Hash(), nil
	}
	// Check whether the genesis block is already written.
	if genesis != nil {
		hash := genesis.ToBlock(nil).Hash()
		if hash != stored {
			return genesis.Config, hash, &GenesisMismatchError{stored, hash}
		}
	}
	// Get the existing chain configuration.
	newcfg := genesis.configOrDefault(stored)
	if overrideBerlin != nil {
		newcfg.BerlinBlock = overrideBerlin
	}
	if err := newcfg.CheckConfigForkOrder(); err != nil {
		return newcfg, common.Hash{}, err
	}
	storedcfg := rawdb.ReadChainConfig(db, stored)
	if storedcfg == nil {
		log.Warn("Found genesis block without chain config")
		rawdb.WriteChainConfig(db, stored, newcfg)
		return newcfg, stored, nil
	}
	// Special case: don't change the existing config of a non-mainnet chain if no new
	// config is supplied. These chains would get AllProtocolChanges (and a compat error)
	// if we just continued here.
	if genesis == nil && stored != params.MainnetGenesisHash {
		return storedcfg, stored, nil
	}
	// Check config compatibility and write the config. Compatibility errors
	// are returned to the caller unless we're already at block zero.
	height := rawdb.ReadHeaderNumber(db, rawdb.ReadHeadHeaderHash(db))
	if height == nil {
		return newcfg, stored, fmt.Errorf("missing block number for head header hash")
	}
	compatErr := storedcfg.CheckCompatible(newcfg, *height)
	if compatErr != nil && *height != 0 && compatErr.RewindTo != 0 {
		return newcfg, stored, compatErr
	}
	rawdb.WriteChainConfig(db, stored, newcfg)
	return newcfg, stored, nil
}
The default genesis block, DefaultGenesisBlock, looks like this:
// filedir: go-ethereum-1.10.2\core\genesis.go L338
// DefaultGenesisBlock returns the Ethereum main net genesis block.
func DefaultGenesisBlock() *Genesis {
	return &Genesis{
		Config:     params.MainnetChainConfig,
		Nonce:      66,
		ExtraData:  hexutil.MustDecode("0x11bbe8db4e347b4e8c937c1c8370e4b5ed33adb3db69cbdb7a38e1e50b1b82fa"),
		GasLimit:   5000,
		Difficulty: big.NewInt(17179869184),
		Alloc:      decodePrealloc(mainnetAllocData),
	}
}
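As a quick sanity check, we can rebuild this hard-coded block and compare its hash against the params.MainnetGenesisHash constant we saw earlier. A minimal sketch, assuming go-ethereum v1.10.x as a dependency (ToBlock(nil) builds the block against a throwaway in-memory state database):

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/core"
	"github.com/ethereum/go-ethereum/params"
)

func main() {
	// Rebuild the hard-coded main-net genesis block and check that its
	// header hash matches params.MainnetGenesisHash (0xd4e567...).
	block := core.DefaultGenesisBlock().ToBlock(nil)
	fmt.Println("computed:", block.Hash().Hex())
	fmt.Println("expected:", params.MainnetGenesisHash.Hex())
}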
Committing the block:
// The block is committed as the canonical head block.
func (g *Genesis) Commit(db ethdb.Database) (*types.Block, error) {
	block := g.ToBlock(db)
	if block.Number().Sign() != 0 {
		return nil, fmt.Errorf("can't commit genesis block with number > 0")
	}
	config := g.Config
	if config == nil {
		config = params.AllEthashProtocolChanges
	}
	if err := config.CheckConfigForkOrder(); err != nil {
		return nil, err
	}
	rawdb.WriteTd(db, block.Hash(), block.NumberU64(), g.Difficulty)
	rawdb.WriteBlock(db, block)
	rawdb.WriteReceipts(db, block.Hash(), block.NumberU64(), nil)
	rawdb.WriteCanonicalHash(db, block.Hash(), block.NumberU64())
	rawdb.WriteHeadBlockHash(db, block.Hash())
	rawdb.WriteHeadFastBlockHash(db, block.Hash())
	rawdb.WriteHeadHeaderHash(db, block.Hash())
	rawdb.WriteChainConfig(db, block.Hash(), config)
	return block, nil
}
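Commit can also be exercised outside a running node. A minimal sketch that writes a custom genesis into an in-memory database; params.TestChainConfig is used only as a convenient non-nil chain config, and the difficulty/gas-limit values are arbitrary:

package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/core"
	"github.com/ethereum/go-ethereum/core/rawdb"
	"github.com/ethereum/go-ethereum/params"
)

func main() {
	// An in-memory database stands in for the node's chaindata LevelDB.
	db := rawdb.NewMemoryDatabase()

	genesis := &core.Genesis{
		Config:     params.TestChainConfig, // any non-nil *ChainConfig
		Difficulty: big.NewInt(1),
		GasLimit:   5000,
	}
	// Commit writes the block, its TD, receipts, canonical-hash mapping
	// and head markers, exactly as in the function above.
	block, err := genesis.Commit(db)
	if err != nil {
		panic(err)
	}
	fmt.Println("genesis written, hash:", block.Hash().Hex())
}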
Creating New Blocks
New blocks are assembled by miners. We won't dig into the full mining flow here, only briefly look at how a new block is produced during mining. mainLoop is a goroutine that packs newly received transactions into the block; along the way it calls the updateSnapshot function (to update the snapshot):
// filedir: go-ethereum-1.10.2\miner\worker.go L432
// mainLoop is a standalone goroutine to regenerate the sealing task based on the received event.
func (w *worker) mainLoop() {
	defer w.txsSub.Unsubscribe()
	defer w.chainHeadSub.Unsubscribe()
	defer w.chainSideSub.Unsubscribe()

	for {
		select {
		case req := <-w.newWorkCh:
			w.commitNewWork(req.interrupt, req.noempty, req.timestamp)

		case ev := <-w.chainSideCh:
			......

		case ev := <-w.txsCh:
			// Apply transactions to the pending state if we're not mining.
			//
			// Note all transactions received may not be continuous with transactions
			// already included in the current mining block. These transactions will
			// be automatically eliminated.
			if !w.isRunning() && w.current != nil {
				// If block is already full, abort
				if gp := w.current.gasPool; gp != nil && gp.Gas() < params.TxGas {
					continue
				}
				......
				if tcount != w.current.tcount {
					w.updateSnapshot()
				}
				......

		// System stopped
		case <-w.exitCh:
			return
		case <-w.txsSub.Err():
			return
		case <-w.chainHeadSub.Err():
			return
		case <-w.chainSideSub.Err():
			return
		}
	}
}
updateSnapshot is implemented as follows; it calls NewBlock to refresh the pending block, i.e. the new block being assembled:
// filedir: go-ethereum-1.10.2\miner\worker.go L705
// updateSnapshot updates pending snapshot block and state.
// Note this function assumes the current variable is thread safe.
func (w *worker) updateSnapshot() {
	w.snapshotMu.Lock()
	defer w.snapshotMu.Unlock()

	var uncles []*types.Header
	w.current.uncles.Each(func(item interface{}) bool {
		hash, ok := item.(common.Hash)
		if !ok {
			return false
		}
		uncle, exist := w.localUncles[hash]
		if !exist {
			uncle, exist = w.remoteUncles[hash]
		}
		if !exist {
			return false
		}
		uncles = append(uncles, uncle.Header())
		return false
	})

	w.snapshotBlock = types.NewBlock(
		w.current.header,
		w.current.txs,
		uncles,
		w.current.receipts,
		trie.NewStackTrie(nil),
	)
	w.snapshotState = w.current.state.Copy()
}
NewBlock is shown below; it derives TxHash, transactions, header.ReceiptHash, header.Bloom and the other block fields:
// filedir: go-ethereum-1.10.2\core\types\block.go L198
// NewBlock creates a new block. The input data is copied,
// changes to header and to the field values will not affect the
// block.
//
// The values of TxHash, UncleHash, ReceiptHash and Bloom in header
// are ignored and set to values derived from the given txs, uncles
// and receipts.
func NewBlock(header *Header, txs []*Transaction, uncles []*Header, receipts []*Receipt, hasher TrieHasher) *Block {
	b := &Block{header: CopyHeader(header), td: new(big.Int)}

	// TODO: panic if len(txs) != len(receipts)
	if len(txs) == 0 {
		b.header.TxHash = EmptyRootHash
	} else {
		b.header.TxHash = DeriveSha(Transactions(txs), hasher)
		b.transactions = make(Transactions, len(txs))
		copy(b.transactions, txs)
	}

	if len(receipts) == 0 {
		b.header.ReceiptHash = EmptyRootHash
	} else {
		b.header.ReceiptHash = DeriveSha(Receipts(receipts), hasher)
		b.header.Bloom = CreateBloom(receipts)
	}

	if len(uncles) == 0 {
		b.header.UncleHash = EmptyUncleHash
	} else {
		b.header.UncleHash = CalcUncleHash(uncles)
		b.uncles = make([]*Header, len(uncles))
		for i := range uncles {
			b.uncles[i] = CopyHeader(uncles[i])
		}
	}

	return b
}
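The TxHash derivation that NewBlock performs can be reproduced in isolation. A minimal sketch, assuming go-ethereum v1.10.x, with two arbitrary legacy transactions:

package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/trie"
)

func main() {
	// Two illustrative legacy transactions; the values are arbitrary.
	to := common.HexToAddress("0x0000000000000000000000000000000000000001")
	txs := types.Transactions{
		types.NewTransaction(0, to, big.NewInt(1), 21000, big.NewInt(1), nil),
		types.NewTransaction(1, to, big.NewInt(2), 21000, big.NewInt(1), nil),
	}
	// Same derivation NewBlock uses for header.TxHash: insert every tx
	// into a stack trie and take the trie root.
	root := types.DeriveSha(txs, trie.NewStackTrie(nil))
	fmt.Println("transactions root:", root.Hex())
}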
Block Validation
Block validation is the key mechanism that keeps the blockchain from forking. Without it, nodes would produce many forks while syncing blocks, and forks are a serious threat to the chain and to the safety of funds. A block is generally validated in the following four situations:

- When a mining node successfully mines a block and submits it to the chain, the node first checks that the block is valid
- When a user submits a block to the chain through an API, the node verifies that the block is valid
- When syncing, a node that receives blocks from its peers first verifies them, and only valid blocks are added to the local chain
- When a node in a mining pool submits work to the pool, the pool verifies the block submitted by the miner
Block validation in Ethereum can roughly be split into header validation and body validation. The body validation logic is shown below; it checks the given block's uncles and verifies the transaction and uncle roots:
// filedir: go-ethereum-1.10.2\core\block_validator.go L48
// ValidateBody validates the given block's uncles and verifies the block
// header's transaction and uncle roots. The headers are assumed to be already
// validated at this point.
func (v *BlockValidator) ValidateBody(block *types.Block) error {
	// Check whether the block's known, and if not, that it's linkable
	if v.bc.HasBlockAndState(block.Hash(), block.NumberU64()) {
		return ErrKnownBlock
	}
	// Header validity is known at this point, check the uncles and transactions
	header := block.Header()
	if err := v.engine.VerifyUncles(v.bc, block); err != nil {
		return err
	}
	if hash := types.CalcUncleHash(block.Uncles()); hash != header.UncleHash {
		return fmt.Errorf("uncle root hash mismatch: have %x, want %x", hash, header.UncleHash)
	}
	if hash := types.DeriveSha(block.Transactions(), trie.NewStackTrie(nil)); hash != header.TxHash {
		return fmt.Errorf("transaction root hash mismatch: have %x, want %x", hash, header.TxHash)
	}
	if !v.bc.HasBlockAndState(block.ParentHash(), block.NumberU64()-1) {
		if !v.bc.HasBlock(block.ParentHash(), block.NumberU64()-1) {
			return consensus.ErrUnknownAncestor
		}
		return consensus.ErrPrunedAncestor
	}
	return nil
}
The VerifyUncles logic is shown below; it verifies that the block includes at most two uncles, gathers the set of past uncles and ancestors, and so on:
// VerifyUncles verifies that the given block's uncles conform to the consensus
// rules of the stock Ethereum ethash engine.
func (ethash *Ethash) VerifyUncles(chain consensus.ChainReader, block *types.Block) error {
	// If we're running a full engine faking, accept any input as valid
	if ethash.config.PowMode == ModeFullFake {
		return nil
	}
	// Verify that there are at most 2 uncles included in this block
	if len(block.Uncles()) > maxUncles {
		return errTooManyUncles
	}
	if len(block.Uncles()) == 0 {
		return nil
	}
	// Gather the set of past uncles and ancestors
	uncles, ancestors := mapset.NewSet(), make(map[common.Hash]*types.Header)

	number, parent := block.NumberU64()-1, block.ParentHash()
	for i := 0; i < 7; i++ {
		ancestor := chain.GetBlock(parent, number)
		if ancestor == nil {
			break
		}
		ancestors[ancestor.Hash()] = ancestor.Header()
		for _, uncle := range ancestor.Uncles() {
			uncles.Add(uncle.Hash())
		}
		parent, number = ancestor.ParentHash(), number-1
	}
	ancestors[block.Hash()] = block.Header()
	uncles.Add(block.Hash())

	// Verify each of the uncles that it's recent, but not an ancestor
	for _, uncle := range block.Uncles() {
		// Make sure every uncle is rewarded only once
		hash := uncle.Hash()
		if uncles.Contains(hash) {
			return errDuplicateUncle
		}
		uncles.Add(hash)

		// Make sure the uncle has a valid ancestry
		if ancestors[hash] != nil {
			return errUncleIsAncestor
		}
		if ancestors[uncle.ParentHash] == nil || uncle.ParentHash == block.ParentHash() {
			return errDanglingUncle
		}
		if err := ethash.verifyHeader(chain, uncle, ancestors[uncle.ParentHash], true, true, time.Now().Unix()); err != nil {
			return err
		}
	}
	return nil
}
Header information is then verified through VerifyHeader, designed as follows:
// filedir: go-ethereum-1.10.2\consensus\ethash\consensus.go L88
// VerifyHeader checks whether a header conforms to the consensus rules of the
// stock Ethereum ethash engine.
func (ethash *Ethash) VerifyHeader(chain consensus.ChainHeaderReader, header *types.Header, seal bool) error {
	// If we're running a full engine faking, accept any input as valid
	if ethash.config.PowMode == ModeFullFake {
		return nil
	}
	// Short circuit if the header is known, or its parent not
	number := header.Number.Uint64()
	if chain.GetHeader(header.Hash(), number) != nil {
		return nil
	}
	parent := chain.GetHeader(header.ParentHash, number-1)
	if parent == nil {
		return consensus.ErrUnknownAncestor
	}
	// Sanity checks passed, do a proper verification
	return ethash.verifyHeader(chain, header, parent, false, seal, time.Now().Unix())
}
The code above first checks whether the block's hash is already known, then fetches the parent header via chain.GetHeader and checks that the parent exists, and finally calls verifyHeader for the remaining checks. verifyHeader, shown below, verifies that the header's Extra (additional attached data) does not exceed its maximum size, then checks the header timestamp, the block difficulty (validated against the timestamp and the parent's difficulty), the gas limit cap, that the gas used does not exceed the gas limit, that the gas limit stays within the allowed bounds, that the block number equals the parent's plus one, and that the block satisfies the consensus (seal) requirement; if everything passes, it validates the special header fields for hard forks:
// filedir: go-ethereum-1.10.2\consensus\ethash\consensus.go L242
// verifyHeader checks whether a header conforms to the consensus rules of the
// stock Ethereum ethash engine.
// See YP section 4.3.4. "Block Header Validity"
func (ethash *Ethash) verifyHeader(chain consensus.ChainHeaderReader, header, parent *types.Header, uncle bool, seal bool, unixNow int64) error {
	// Ensure that the header's extra-data section is of a reasonable size
	if uint64(len(header.Extra)) > params.MaximumExtraDataSize {
		return fmt.Errorf("extra-data too long: %d > %d", len(header.Extra), params.MaximumExtraDataSize)
	}
	// Verify the header's timestamp
	if !uncle {
		if header.Time > uint64(unixNow+allowedFutureBlockTimeSeconds) {
			return consensus.ErrFutureBlock
		}
	}
	if header.Time <= parent.Time {
		return errOlderBlockTime
	}
	// Verify the block's difficulty based on its timestamp and parent's difficulty
	expected := ethash.CalcDifficulty(chain, header.Time, parent)

	if expected.Cmp(header.Difficulty) != 0 {
		return fmt.Errorf("invalid difficulty: have %v, want %v", header.Difficulty, expected)
	}
	// Verify that the gas limit is <= 2^63-1
	cap := uint64(0x7fffffffffffffff)
	if header.GasLimit > cap {
		return fmt.Errorf("invalid gasLimit: have %v, max %v", header.GasLimit, cap)
	}
	// Verify that the gasUsed is <= gasLimit
	if header.GasUsed > header.GasLimit {
		return fmt.Errorf("invalid gasUsed: have %d, gasLimit %d", header.GasUsed, header.GasLimit)
	}
	// Verify that the gas limit remains within allowed bounds
	diff := int64(parent.GasLimit) - int64(header.GasLimit)
	if diff < 0 {
		diff *= -1
	}
	limit := parent.GasLimit / params.GasLimitBoundDivisor

	if uint64(diff) >= limit || header.GasLimit < params.MinGasLimit {
		return fmt.Errorf("invalid gas limit: have %d, want %d += %d", header.GasLimit, parent.GasLimit, limit)
	}
	// Verify that the block number is parent's +1
	if diff := new(big.Int).Sub(header.Number, parent.Number); diff.Cmp(big.NewInt(1)) != 0 {
		return consensus.ErrInvalidNumber
	}
	// Verify the engine specific seal securing the block
	if seal {
		if err := ethash.verifySeal(chain, header, false); err != nil {
			return err
		}
	}
	// If all checks passed, validate any special fields for hard forks
	if err := misc.VerifyDAOHeaderExtraData(chain.Config(), header); err != nil {
		return err
	}
	if err := misc.VerifyForkHashes(chain.Config(), header, uncle); err != nil {
		return err
	}
	return nil
}
The verifySeal logic is shown below:
// filedir: go-ethereum-1.10.2\consensus\ethash\consensus.go L490
// verifySeal checks whether a block satisfies the PoW difficulty requirements,
// either using the usual ethash cache for it, or alternatively using a full DAG
// to make remote mining fast.
func (ethash *Ethash) verifySeal(chain consensus.ChainHeaderReader, header *types.Header, fulldag bool) error {
	// If we're running a fake PoW, accept any seal as valid
	if ethash.config.PowMode == ModeFake || ethash.config.PowMode == ModeFullFake {
		time.Sleep(ethash.fakeDelay)
		if ethash.fakeFail == header.Number.Uint64() {
			return errInvalidPoW
		}
		return nil
	}
	// If we're running a shared PoW, delegate verification to it
	if ethash.shared != nil {
		return ethash.shared.verifySeal(chain, header, fulldag)
	}
	// Ensure that we have a valid difficulty for the block
	if header.Difficulty.Sign() <= 0 {
		return errInvalidDifficulty
	}
	// Recompute the digest and PoW values
	number := header.Number.Uint64()

	var (
		digest []byte
		result []byte
	)
	// If fast-but-heavy PoW verification was requested, use an ethash dataset
	if fulldag {
		dataset := ethash.dataset(number, true)
		if dataset.generated() {
			digest, result = hashimotoFull(dataset.dataset, ethash.SealHash(header).Bytes(), header.Nonce.Uint64())

			// Datasets are unmapped in a finalizer. Ensure that the dataset stays alive
			// until after the call to hashimotoFull so it's not unmapped while being used.
			runtime.KeepAlive(dataset)
		} else {
			// Dataset not yet generated, don't hang, use a cache instead
			fulldag = false
		}
	}
	// If slow-but-light PoW verification was requested (or DAG not yet ready), use an ethash cache
	if !fulldag {
		cache := ethash.cache(number)

		size := datasetSize(number)
		if ethash.config.PowMode == ModeTest {
			size = 32 * 1024
		}
		digest, result = hashimotoLight(size, cache.cache, ethash.SealHash(header).Bytes(), header.Nonce.Uint64())

		// Caches are unmapped in a finalizer. Ensure that the cache stays alive
		// until after the call to hashimotoLight so it's not unmapped while being used.
		runtime.KeepAlive(cache)
	}
	// Verify the calculated values against the ones provided in the header
	if !bytes.Equal(header.MixDigest[:], digest) {
		return errInvalidMixDigest
	}
	target := new(big.Int).Div(two256, header.Difficulty)
	if new(big.Int).SetBytes(result).Cmp(target) > 0 {
		return errInvalidPoW
	}
	return nil
}
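Stripped of the ethash machinery, the final PoW check is a single big-integer comparison: the hashimoto result, read as a 256-bit number, must not exceed 2^256 / difficulty. A self-contained sketch of just that comparison (the real verifySeal additionally recomputes digest and result from the seal hash and nonce):

package main

import (
	"fmt"
	"math/big"
)

// two256 = 2^256, the same constant the ethash package uses.
var two256 = new(big.Int).Exp(big.NewInt(2), big.NewInt(256), nil)

// meetsTarget reproduces the final comparison of verifySeal: the hashimoto
// result, interpreted as a 256-bit integer, must be <= 2^256 / difficulty.
// Higher difficulty means a smaller target and thus fewer valid nonces.
func meetsTarget(result []byte, difficulty *big.Int) bool {
	target := new(big.Int).Div(two256, difficulty)
	return new(big.Int).SetBytes(result).Cmp(target) <= 0
}

func main() {
	// A fake 32-byte "result" with many leading zero bytes easily
	// satisfies a low difficulty.
	result := make([]byte, 32)
	result[31] = 1
	fmt.Println(meetsTarget(result, big.NewInt(131072))) // true
}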
Let's also trace VerifyForkHashes, which verifies that blocks at network hard-fork heights have the correct hashes, so that clients do not wander off onto different chains:
// filedir: go-ethereum-1.10.2\consensus\misc\forks.go L26
// VerifyForkHashes verifies that blocks conforming to network hard-forks do have
// the correct hashes, to avoid clients going off on different chains. This is an
// optional feature.
func VerifyForkHashes(config *params.ChainConfig, header *types.Header, uncle bool) error {
	// We don't care about uncles
	if uncle {
		return nil
	}
	// If the homestead reprice hash is set, validate it
	if config.EIP150Block != nil && config.EIP150Block.Cmp(header.Number) == 0 {
		if config.EIP150Hash != (common.Hash{}) && config.EIP150Hash != header.Hash() {
			return fmt.Errorf("homestead gas reprice fork: have 0x%x, want 0x%x", header.Hash(), config.EIP150Hash)
		}
	}
	// All ok, return
	return nil
}
Difficulty Target
Since the validation section above touched on difficulty verification, let's briefly look at the block difficulty target. The adjustment algorithm is implemented as follows:
// filedir: go-ethereum-1.10.2\consensus\ethash\consensus.go L304
// CalcDifficulty is the difficulty adjustment algorithm. It returns
// the difficulty that a new block should have when created at time
// given the parent block's time and difficulty.
func (ethash *Ethash) CalcDifficulty(chain consensus.ChainHeaderReader, time uint64, parent *types.Header) *big.Int {
	return CalcDifficulty(chain.Config(), time, parent)
}

// CalcDifficulty is the difficulty adjustment algorithm. It returns
// the difficulty that a new block should have when created at time
// given the parent block's time and difficulty.
func CalcDifficulty(config *params.ChainConfig, time uint64, parent *types.Header) *big.Int {
	next := new(big.Int).Add(parent.Number, big1)
	switch {
	case config.IsMuirGlacier(next):
		return calcDifficultyEip2384(time, parent)
	case config.IsConstantinople(next):
		return calcDifficultyConstantinople(time, parent)
	case config.IsByzantium(next):
		return calcDifficultyByzantium(time, parent)
	case config.IsHomestead(next):
		return calcDifficultyHomestead(time, parent)
	default:
		return calcDifficultyFrontier(time, parent)
	}
}
The later variants all build on the Homestead rules, so let's step straight into calcDifficultyHomestead:
// filedir: go-ethereum-1.10.2\consensus\ethash\consensus.go L404
// calcDifficultyHomestead is the difficulty adjustment algorithm. It returns
// the difficulty that a new block should have when created at time given the
// parent block's time and difficulty. The calculation uses the Homestead rules.
func calcDifficultyHomestead(time uint64, parent *types.Header) *big.Int {
	// https://github.com/ethereum/EIPs/blob/master/EIPS/eip-2.md
	// algorithm:
	// diff = (parent_diff +
	//         (parent_diff / 2048 * max(1 - (block_timestamp - parent_timestamp) // 10, -99))
	//        ) + 2^(periodCount - 2)

	bigTime := new(big.Int).SetUint64(time)
	bigParentTime := new(big.Int).SetUint64(parent.Time)

	// holds intermediate values to make the algo easier to read & audit
	x := new(big.Int)
	y := new(big.Int)

	// 1 - (block_timestamp - parent_timestamp) // 10
	x.Sub(bigTime, bigParentTime)
	x.Div(x, big10)
	x.Sub(big1, x)

	// max(1 - (block_timestamp - parent_timestamp) // 10, -99)
	if x.Cmp(bigMinus99) < 0 {
		x.Set(bigMinus99)
	}
	// (parent_diff + parent_diff // 2048 * max(1 - (block_timestamp - parent_timestamp) // 10, -99))
	y.Div(parent.Difficulty, params.DifficultyBoundDivisor)
	x.Mul(y, x)
	x.Add(parent.Difficulty, x)

	// minimum difficulty can ever be (before exponential factor)
	if x.Cmp(params.MinimumDifficulty) < 0 {
		x.Set(params.MinimumDifficulty)
	}
	// for the exponential factor
	periodCount := new(big.Int).Add(parent.Number, big1)
	periodCount.Div(periodCount, expDiffPeriod)

	// the exponential factor, commonly referred to as "the bomb"
	// diff = diff + 2^(periodCount - 2)
	if periodCount.Cmp(big1) > 0 {
		y.Sub(periodCount, big2)
		y.Exp(big2, y, nil)
		x.Add(x, y)
	}
	return x
}
The calculation is:
diff = (parent_diff + parent_diff // 2048 * max(1 - (block_timestamp - parent_timestamp) // 10, -99)) + 2^(periodCount - 2)
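The rule translates almost mechanically into big.Int arithmetic. A self-contained sketch with geth's constants inlined (bound divisor 2048, minimum difficulty 131072, bomb period 100000), assuming block_timestamp >= parent_timestamp:

package main

import (
	"fmt"
	"math/big"
)

// homesteadDifficulty is a standalone transcription of the EIP-2 rule quoted
// above; the geth original is calcDifficultyHomestead.
func homesteadDifficulty(time, parentTime uint64, parentDiff, parentNumber *big.Int) *big.Int {
	// adj = max(1 - (block_timestamp - parent_timestamp) // 10, -99)
	adj := new(big.Int).SetUint64(time - parentTime)
	adj.Div(adj, big.NewInt(10))
	adj.Sub(big.NewInt(1), adj)
	if adj.Cmp(big.NewInt(-99)) < 0 {
		adj.Set(big.NewInt(-99))
	}
	// diff = parent_diff + parent_diff // 2048 * adj
	diff := new(big.Int).Div(parentDiff, big.NewInt(2048))
	diff.Mul(diff, adj)
	diff.Add(parentDiff, diff)
	// Clamp at the minimum difficulty before the exponential factor.
	if min := big.NewInt(131072); diff.Cmp(min) < 0 {
		diff.Set(min)
	}
	// The "difficulty bomb": + 2^(periodCount - 2) once periodCount > 1.
	period := new(big.Int).Add(parentNumber, big.NewInt(1))
	period.Div(period, big.NewInt(100000))
	if period.Cmp(big.NewInt(1)) > 0 {
		bomb := new(big.Int).Sub(period, big.NewInt(2))
		bomb.Exp(big.NewInt(2), bomb, nil)
		diff.Add(diff, bomb)
	}
	return diff
}

func main() {
	// A 9-second block keeps adj at 1, so difficulty rises by parent/2048:
	// 131072 + 131072/2048 = 131136.
	fmt.Println(homesteadDifficulty(1009, 1000, big.NewInt(131072), big.NewInt(42)))
}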
Chain Construction
During Ethereum startup, NewBlockChain is called to create the blockchain. The call flow is:
geth ——> makeFullNode ——> RegisterEthService ——> eth.New ——> core.NewBlockChain
When New creates the Ethereum instance object, it calls SetupGenesisBlockWithOverride to load the genesis block and obtain the basic chain configuration, ReadDatabaseVersion to get the DB version, NewBlockChain to build the Ethereum blockchain, NewTxPool to create a transaction pool, NewOracle for gas price estimation, and so on:
// filedir: go-ethereum-1.10.2\eth\backend.go L98
// New creates a new Ethereum object (including the
// initialisation of the common Ethereum object)
func New(stack *node.Node, config *ethconfig.Config) (*Ethereum, error) {
	// Ensure configuration values are compatible and sane
	if config.SyncMode == downloader.LightSync {
		return nil, errors.New("can't run eth.Ethereum in light sync mode, use les.LightEthereum")
	}
	if !config.SyncMode.IsValid() {
		return nil, fmt.Errorf("invalid sync mode %d", config.SyncMode)
	}
	if config.Miner.GasPrice == nil || config.Miner.GasPrice.Cmp(common.Big0) <= 0 {
		log.Warn("Sanitizing invalid miner gas price", "provided", config.Miner.GasPrice, "updated", ethconfig.Defaults.Miner.GasPrice)
		config.Miner.GasPrice = new(big.Int).Set(ethconfig.Defaults.Miner.GasPrice)
	}
	if config.NoPruning && config.TrieDirtyCache > 0 {
		if config.SnapshotCache > 0 {
			config.TrieCleanCache += config.TrieDirtyCache * 3 / 5
			config.SnapshotCache += config.TrieDirtyCache * 2 / 5
		} else {
			config.TrieCleanCache += config.TrieDirtyCache
		}
		config.TrieDirtyCache = 0
	}
	log.Info("Allocated trie memory caches", "clean", common.StorageSize(config.TrieCleanCache)*1024*1024, "dirty", common.StorageSize(config.TrieDirtyCache)*1024*1024)

	// Transfer mining-related config to the ethash config.
	ethashConfig := config.Ethash
	ethashConfig.NotifyFull = config.Miner.NotifyFull

	// Assemble the Ethereum object
	chainDb, err := stack.OpenDatabaseWithFreezer("chaindata", config.DatabaseCache, config.DatabaseHandles, config.DatabaseFreezer, "eth/db/chaindata/", false)
	if err != nil {
		return nil, err
	}
	chainConfig, genesisHash, genesisErr := core.SetupGenesisBlockWithOverride(chainDb, config.Genesis, config.OverrideBerlin)
	if _, ok := genesisErr.(*params.ConfigCompatError); genesisErr != nil && !ok {
		return nil, genesisErr
	}
	log.Info("Initialised chain configuration", "config", chainConfig)

	if err := pruner.RecoverPruning(stack.ResolvePath(""), chainDb, stack.ResolvePath(config.TrieCleanCacheJournal)); err != nil {
		log.Error("Failed to recover state", "error", err)
	}
	eth := &Ethereum{
		config:            config,
		chainDb:           chainDb,
		eventMux:          stack.EventMux(),
		accountManager:    stack.AccountManager(),
		engine:            ethconfig.CreateConsensusEngine(stack, chainConfig, &ethashConfig, config.Miner.Notify, config.Miner.Noverify, chainDb),
		closeBloomHandler: make(chan struct{}),
		networkID:         config.NetworkId,
		gasPrice:          config.Miner.GasPrice,
		etherbase:         config.Miner.Etherbase,
		bloomRequests:     make(chan chan *bloombits.Retrieval),
		bloomIndexer:      core.NewBloomIndexer(chainDb, params.BloomBitsBlocks, params.BloomConfirms),
		p2pServer:         stack.Server(),
	}

	bcVersion := rawdb.ReadDatabaseVersion(chainDb)
	var dbVer = "<nil>"
	if bcVersion != nil {
		dbVer = fmt.Sprintf("%d", *bcVersion)
	}
	log.Info("Initialising Ethereum protocol", "network", config.NetworkId, "dbversion", dbVer)

	if !config.SkipBcVersionCheck {
		if bcVersion != nil && *bcVersion > core.BlockChainVersion {
			return nil, fmt.Errorf("database version is v%d, Geth %s only supports v%d", *bcVersion, params.VersionWithMeta, core.BlockChainVersion)
		} else if bcVersion == nil || *bcVersion < core.BlockChainVersion {
			log.Warn("Upgrade blockchain database version", "from", dbVer, "to", core.BlockChainVersion)
			rawdb.WriteDatabaseVersion(chainDb, core.BlockChainVersion)
		}
	}
	var (
		vmConfig = vm.Config{
			EnablePreimageRecording: config.EnablePreimageRecording,
			EWASMInterpreter:        config.EWASMInterpreter,
			EVMInterpreter:          config.EVMInterpreter,
		}
		cacheConfig = &core.CacheConfig{
			TrieCleanLimit:      config.TrieCleanCache,
			TrieCleanJournal:    stack.ResolvePath(config.TrieCleanCacheJournal),
			TrieCleanRejournal:  config.TrieCleanCacheRejournal,
			TrieCleanNoPrefetch: config.NoPrefetch,
			TrieDirtyLimit:      config.TrieDirtyCache,
			TrieDirtyDisabled:   config.NoPruning,
			TrieTimeLimit:       config.TrieTimeout,
			SnapshotLimit:       config.SnapshotCache,
			Preimages:           config.Preimages,
		}
	)
	eth.blockchain, err = core.NewBlockChain(chainDb, cacheConfig, chainConfig, eth.engine, vmConfig, eth.shouldPreserve, &config.TxLookupLimit)
	if err != nil {
		return nil, err
	}
	// Rewind the chain in case of an incompatible config upgrade.
	if compat, ok := genesisErr.(*params.ConfigCompatError); ok {
		log.Warn("Rewinding chain to upgrade configuration", "err", compat)
		eth.blockchain.SetHead(compat.RewindTo)
		rawdb.WriteChainConfig(chainDb, genesisHash, chainConfig)
	}
	eth.bloomIndexer.Start(eth.blockchain)

	if config.TxPool.Journal != "" {
		config.TxPool.Journal = stack.ResolvePath(config.TxPool.Journal)
	}
	eth.txPool = core.NewTxPool(config.TxPool, chainConfig, eth.blockchain)

	// Permit the downloader to use the trie cache allowance during fast sync
	cacheLimit := cacheConfig.TrieCleanLimit + cacheConfig.TrieDirtyLimit + cacheConfig.SnapshotLimit
	checkpoint := config.Checkpoint
	if checkpoint == nil {
		checkpoint = params.TrustedCheckpoints[genesisHash]
	}
	if eth.handler, err = newHandler(&handlerConfig{
		Database:   chainDb,
		Chain:      eth.blockchain,
		TxPool:     eth.txPool,
		Network:    config.NetworkId,
		Sync:       config.SyncMode,
		BloomCache: uint64(cacheLimit),
		EventMux:   eth.eventMux,
		Checkpoint: checkpoint,
		Whitelist:  config.Whitelist,
	}); err != nil {
		return nil, err
	}
	eth.miner = miner.New(eth, &config.Miner, chainConfig, eth.EventMux(), eth.engine, eth.isLocalBlock)
	eth.miner.SetExtra(makeExtraData(config.Miner.ExtraData))

	eth.APIBackend = &EthAPIBackend{stack.Config().ExtRPCEnabled(), stack.Config().AllowUnprotectedTxs, eth, nil}
	if eth.APIBackend.allowUnprotectedTxs {
		log.Info("Unprotected transactions allowed")
	}
	gpoParams := config.GPO
	if gpoParams.Default == nil {
		gpoParams.Default = config.Miner.GasPrice
	}
	eth.APIBackend.gpo = gasprice.NewOracle(eth.APIBackend, gpoParams)

	eth.ethDialCandidates, err = setupDiscovery(eth.config.EthDiscoveryURLs)
	if err != nil {
		return nil, err
	}
	eth.snapDialCandidates, err = setupDiscovery(eth.config.SnapDiscoveryURLs)
	if err != nil {
		return nil, err
	}
	// Start the RPC service
	eth.netRPCService = ethapi.NewPublicNetAPI(eth.p2pServer, config.NetworkId)

	// Register the backend on the node
	stack.RegisterAPIs(eth.APIs())
	stack.RegisterProtocols(eth.Protocols())
	stack.RegisterLifecycle(eth)
	// Check for unclean shutdown
	if uncleanShutdowns, discards, err := rawdb.PushUncleanShutdownMarker(chainDb); err != nil {
		log.Error("Could not update unclean-shutdown-marker list", "error", err)
	} else {
		if discards > 0 {
			log.Warn("Old unclean shutdowns found", "count", discards)
		}
		for _, tstamp := range uncleanShutdowns {
			t := time.Unix(int64(tstamp), 0)
			log.Warn("Unclean shutdown detected", "booted", t,
				"age", common.PrettyAge(t))
		}
	}
	return eth, nil
}
NewBlockChain returns a fully initialised blockchain using the information available in the database. It mainly does the following (the full function follows the list):

- Creates the various LRU (least-recently-used) caches
- Initialises triegc (a priority queue of block numbers used for trie garbage collection), stateCache, NewBlockValidator(), newStatePrefetcher() and NewStateProcessor()
- NewHeaderChain() initialises the header chain
- bc.genesisBlock = bc.GetBlockByNumber(0) fetches the genesis block
- bc.loadLastState() loads the latest chain state
- Checks whether the local chain contains any bad block; if so, calls bc.SetHead to rewind to the header before the hard fork
- go bc.update() periodically processes future blocks
// filedir: go-ethereum-1.10.2\core\blockchain.go L214
// NewBlockChain returns a fully initialised block chain using information
// available in the database. It initialises the default Ethereum Validator and
// Processor.
func NewBlockChain(db ethdb.Database, cacheConfig *CacheConfig, chainConfig *params.ChainConfig, engine consensus.Engine, vmConfig vm.Config, shouldPreserve func(block *types.Block) bool, txLookupLimit *uint64) (*BlockChain, error) {
	if cacheConfig == nil {
		cacheConfig = defaultCacheConfig
	}
	bodyCache, _ := lru.New(bodyCacheLimit)
	bodyRLPCache, _ := lru.New(bodyCacheLimit)
	receiptsCache, _ := lru.New(receiptsCacheLimit)
	blockCache, _ := lru.New(blockCacheLimit)
	txLookupCache, _ := lru.New(txLookupCacheLimit)
	futureBlocks, _ := lru.New(maxFutureBlocks)

	bc := &BlockChain{
		chainConfig: chainConfig,
		cacheConfig: cacheConfig,
		db:          db,
		triegc:      prque.New(nil),
		stateCache: state.NewDatabaseWithConfig(db, &trie.Config{
			Cache:     cacheConfig.TrieCleanLimit,
			Journal:   cacheConfig.TrieCleanJournal,
			Preimages: cacheConfig.Preimages,
		}),
		quit:           make(chan struct{}),
		shouldPreserve: shouldPreserve,
		bodyCache:      bodyCache,
		bodyRLPCache:   bodyRLPCache,
		receiptsCache:  receiptsCache,
		blockCache:     blockCache,
		txLookupCache:  txLookupCache,
		futureBlocks:   futureBlocks,
		engine:         engine,
		vmConfig:       vmConfig,
	}
	bc.validator = NewBlockValidator(chainConfig, bc, engine)
	bc.prefetcher = newStatePrefetcher(chainConfig, bc, engine)
	bc.processor = NewStateProcessor(chainConfig, bc, engine)

	var err error
	bc.hc, err = NewHeaderChain(db, chainConfig, engine, bc.insertStopped)
	if err != nil {
		return nil, err
	}
	bc.genesisBlock = bc.GetBlockByNumber(0)
	if bc.genesisBlock == nil {
		return nil, ErrNoGenesis
	}

	var nilBlock *types.Block
	bc.currentBlock.Store(nilBlock)
	bc.currentFastBlock.Store(nilBlock)

	// Initialize the chain with ancient data if it isn't empty.
	var txIndexBlock uint64

	if bc.empty() {
		rawdb.InitDatabaseFromFreezer(bc.db)
		// If ancient database is not empty, reconstruct all missing
		// indices in the background.
		frozen, _ := bc.db.Ancients()
		if frozen > 0 {
			txIndexBlock = frozen
		}
	}
	if err := bc.loadLastState(); err != nil { // load the latest state
		return nil, err
	}
	// Make sure the state associated with the block is available
	head := bc.CurrentBlock()
	if _, err := state.New(head.Root(), bc.stateCache, bc.snaps); err != nil {
		// Head state is missing, before the state recovery, find out the
		// disk layer point of snapshot(if it's enabled). Make sure the
		// rewound point is lower than disk layer.
		var diskRoot common.Hash
		if bc.cacheConfig.SnapshotLimit > 0 {
			diskRoot = rawdb.ReadSnapshotRoot(bc.db)
		}
		if diskRoot != (common.Hash{}) {
			log.Warn("Head state missing, repairing", "number", head.Number(), "hash", head.Hash(), "snaproot", diskRoot)

			snapDisk, err := bc.SetHeadBeyondRoot(head.NumberU64(), diskRoot)
			if err != nil {
				return nil, err
			}
			// Chain rewound, persist old snapshot number to indicate recovery procedure
			if snapDisk != 0 {
				rawdb.WriteSnapshotRecoveryNumber(bc.db, snapDisk)
			}
		} else {
			log.Warn("Head state missing, repairing", "number", head.Number(), "hash", head.Hash())
			if err := bc.SetHead(head.NumberU64()); err != nil {
				return nil, err
			}
		}
	}
	// Ensure that a previous crash in SetHead doesn't leave extra ancients
	if frozen, err := bc.db.Ancients(); err == nil && frozen > 0 {
		var (
			needRewind bool
			low        uint64
		)
		// The head full block may be rolled back to a very low height due to
		// blockchain repair. If the head full block is even lower than the ancient
		// chain, truncate the ancient store.
		fullBlock := bc.CurrentBlock()
		if fullBlock != nil && fullBlock.Hash() != bc.genesisBlock.Hash() && fullBlock.NumberU64() < frozen-1 {
			needRewind = true
			low = fullBlock.NumberU64()
		}
		// In fast sync, it may happen that ancient data has been written to the
		// ancient store, but the LastFastBlock has not been updated, truncate the
		// extra data here.
		fastBlock := bc.CurrentFastBlock()
		if fastBlock != nil && fastBlock.NumberU64() < frozen-1 {
			needRewind = true
			if fastBlock.NumberU64() < low || low == 0 {
				low = fastBlock.NumberU64()
			}
		}
		if needRewind {
			log.Error("Truncating ancient chain", "from", bc.CurrentHeader().Number.Uint64(), "to", low)
			if err := bc.SetHead(low); err != nil {
				return nil, err
			}
		}
	}
	// The first thing the node will do is reconstruct the verification data for
	// the head block (ethash cache or clique voting snapshot). Might as well do
	// it in advance.
	bc.engine.VerifyHeader(bc, bc.CurrentHeader(), true)

	// Check the current state of the block hashes and make sure that we do not have any of the bad blocks in our chain
	for hash := range BadHashes {
		if header := bc.GetHeaderByHash(hash); header != nil {
			// get the canonical block corresponding to the offending header's number
			headerByNumber := bc.GetHeaderByNumber(header.Number.Uint64())
			// make sure the headerByNumber (if present) is in our current canonical chain
			if headerByNumber != nil && headerByNumber.Hash() == header.Hash() {
				log.Error("Found bad hash, rewinding chain", "number", header.Number, "hash", header.ParentHash)
				if err := bc.SetHead(header.Number.Uint64() - 1); err != nil {
					return nil, err
				}
				log.Error("Chain rewind was successful, resuming normal operation")
			}
		}
	}
	// Load any existing snapshot, regenerating it if loading failed
	if bc.cacheConfig.SnapshotLimit > 0 {
		// If the chain was rewound past the snapshot persistent layer (causing
		// a recovery block number to be persisted to disk), check if we're still
		// in recovery mode and in that case, don't invalidate the snapshot on a
		// head mismatch.
		var recover bool

		head := bc.CurrentBlock()
		if layer := rawdb.ReadSnapshotRecoveryNumber(bc.db); layer != nil && *layer > head.NumberU64() {
			log.Warn("Enabling snapshot recovery", "chainhead", head.NumberU64(), "diskbase", *layer)
			recover = true
		}
		bc.snaps, _ = snapshot.New(bc.db, bc.stateCache.TrieDB(), bc.cacheConfig.SnapshotLimit, head.Root(), !bc.cacheConfig.SnapshotWait, true, recover)
	}
	// Take ownership of this particular state
	go bc.update()
	if txLookupLimit != nil {
		bc.txLookupLimit = *txLookupLimit

		bc.wg.Add(1)
		go bc.maintainTxIndex(txIndexBlock)
	}
	// If periodic cache journal is required, spin it up.
	if bc.cacheConfig.TrieCleanRejournal > 0 {
		if bc.cacheConfig.TrieCleanRejournal < time.Minute {
			log.Warn("Sanitizing invalid trie cache journal time", "provided", bc.cacheConfig.TrieCleanRejournal, "updated", time.Minute)
			bc.cacheConfig.TrieCleanRejournal = time.Minute
		}
		triedb := bc.stateCache.TrieDB()
		bc.wg.Add(1)
		go func() {
			defer bc.wg.Done()
			triedb.SaveCachePeriodically(bc.cacheConfig.TrieCleanJournal, bc.cacheConfig.TrieCleanRejournal, bc.quit)
		}()
	}
	return bc, nil
}
Here, bc.genesisBlock = bc.GetBlockByNumber(0) checks whether the genesis block exists; if it does not, ErrNoGenesis is returned directly:

bc.genesisBlock = bc.GetBlockByNumber(0)
if bc.genesisBlock == nil {
	return nil, ErrNoGenesis
}
It then checks whether the chain data is empty; if so, InitDatabaseFromFreezer initialises it from the previously frozen (ancient) block data, after which txIndexBlock is updated:
if bc.empty() {
	rawdb.InitDatabaseFromFreezer(bc.db)
	// If ancient database is not empty, reconstruct all missing
	// indices in the background.
	frozen, _ := bc.db.Ancients()
	if frozen > 0 {
		txIndexBlock = frozen
	}
}
loadLastState loads the latest chain state: it first reads the head block hash and checks whether the DB is empty or corrupt, calling bc.Reset to reset the chain if so; it then makes sure the head block and header are available, restores the head fast block, and logs the result:
// loadLastState loads the last known chain state from the database. This method
// assumes that the chain manager mutex is held.
func (bc *BlockChain) loadLastState() error {
	// Restore the last known head block
	head := rawdb.ReadHeadBlockHash(bc.db)
	if head == (common.Hash{}) {
		// Corrupt or empty database, init from scratch
		log.Warn("Empty database, resetting chain")
		return bc.Reset()
	}
	// Make sure the entire head block is available
	currentBlock := bc.GetBlockByHash(head)
	if currentBlock == nil {
		// Corrupt or empty database, init from scratch
		log.Warn("Head block missing, resetting chain", "hash", head)
		return bc.Reset()
	}
	// Everything seems to be fine, set as the head block
	bc.currentBlock.Store(currentBlock)
	headBlockGauge.Update(int64(currentBlock.NumberU64()))

	// Restore the last known head header
	currentHeader := currentBlock.Header()
	if head := rawdb.ReadHeadHeaderHash(bc.db); head != (common.Hash{}) {
		if header := bc.GetHeaderByHash(head); header != nil {
			currentHeader = header
		}
	}
	bc.hc.SetCurrentHeader(currentHeader)

	// Restore the last known head fast block
	bc.currentFastBlock.Store(currentBlock)
	headFastBlockGauge.Update(int64(currentBlock.NumberU64()))

	if head := rawdb.ReadHeadFastBlockHash(bc.db); head != (common.Hash{}) {
		if block := bc.GetBlockByHash(head); block != nil {
			bc.currentFastBlock.Store(block)
			headFastBlockGauge.Update(int64(block.NumberU64()))
		}
	}
	// Issue a status log for the user
	currentFastBlock := bc.CurrentFastBlock()

	headerTd := bc.GetTd(currentHeader.Hash(), currentHeader.Number.Uint64())
	blockTd := bc.GetTd(currentBlock.Hash(), currentBlock.NumberU64())
	fastTd := bc.GetTd(currentFastBlock.Hash(), currentFastBlock.NumberU64())

	log.Info("Loaded most recent local header", "number", currentHeader.Number, "hash", currentHeader.Hash(), "td", headerTd, "age", common.PrettyAge(time.Unix(int64(currentHeader.Time), 0)))
	log.Info("Loaded most recent local full block", "number", currentBlock.Number(), "hash", currentBlock.Hash(), "td", blockTd, "age", common.PrettyAge(time.Unix(int64(currentBlock.Time()), 0)))
	log.Info("Loaded most recent local fast block", "number", currentFastBlock.Number(), "hash", currentFastBlock.Hash(), "td", fastTd, "age", common.PrettyAge(time.Unix(int64(currentFastBlock.Time()), 0)))

	if pivot := rawdb.ReadLastPivotNumber(bc.db); pivot != nil {
		log.Info("Loaded last fast-sync pivot marker", "number", *pivot)
	}
	return nil
}
bc.SetHead() delegates to bc.SetHeadBeyondRoot for the actual logic. Inside SetHeadBeyondRoot, updateFn performs the rewind update and delFn deletes all data and caches of the intermediate block headers; at the end, the caches are purged and bc.loadLastState() (shown above) reloads the latest local state:
// filedir: go-ethereum-1.10.2\core\blockchain.go L476

// SetHead rewinds the local chain to a new head. Depending on whether the node
// was fast synced or full synced and in which state, the method will try to
// delete minimal data from disk whilst retaining chain consistency.
func (bc *BlockChain) SetHead(head uint64) error {
	_, err := bc.SetHeadBeyondRoot(head, common.Hash{})
	return err
}

// SetHeadBeyondRoot rewinds the local chain to a new head with the extra condition
// that the rewind must pass the specified state root. This method is meant to be
// used when rewiding with snapshots enabled to ensure that we go back further than
// persistent disk layer. Depending on whether the node was fast synced or full, and
// in which state, the method will try to delete minimal data from disk whilst
// retaining chain consistency.
//
// The method returns the block number where the requested root cap was found.
func (bc *BlockChain) SetHeadBeyondRoot(head uint64, root common.Hash) (uint64, error) {
	bc.chainmu.Lock()
	defer bc.chainmu.Unlock()

	// Track the block number of the requested root hash
	var rootNumber uint64 // (no root == always 0)

	// Retrieve the last pivot block to short circuit rollbacks beyond it and the
	// current freezer limit to start nuking id underflown
	pivot := rawdb.ReadLastPivotNumber(bc.db)
	frozen, _ := bc.db.Ancients()

	updateFn := func(db ethdb.KeyValueWriter, header *types.Header) (uint64, bool) {
		// Rewind the block chain, ensuring we don't end up with a stateless head
		// block. Note, depth equality is permitted to allow using SetHead as a
		// chain reparation mechanism without deleting any data!
		if currentBlock := bc.CurrentBlock(); currentBlock != nil && header.Number.Uint64() <= currentBlock.NumberU64() {
			newHeadBlock := bc.GetBlock(header.Hash(), header.Number.Uint64())
			if newHeadBlock == nil {
				log.Error("Gap in the chain, rewinding to genesis", "number", header.Number, "hash", header.Hash())
				newHeadBlock = bc.genesisBlock
			} else {
				// Block exists, keep rewinding until we find one with state,
				// keeping rewinding until we exceed the optional threshold
				// root hash
				beyondRoot := (root == common.Hash{}) // Flag whether we're beyond the requested root (no root, always true)

				for {
					// If a root threshold was requested but not yet crossed, check
					if root != (common.Hash{}) && !beyondRoot && newHeadBlock.Root() == root {
						beyondRoot, rootNumber = true, newHeadBlock.NumberU64()
					}
					if _, err := state.New(newHeadBlock.Root(), bc.stateCache, bc.snaps); err != nil {
						log.Trace("Block state missing, rewinding further", "number", newHeadBlock.NumberU64(), "hash", newHeadBlock.Hash())
						if pivot == nil || newHeadBlock.NumberU64() > *pivot {
							parent := bc.GetBlock(newHeadBlock.ParentHash(), newHeadBlock.NumberU64()-1)
							if parent != nil {
								newHeadBlock = parent
								continue
							}
							log.Error("Missing block in the middle, aiming genesis", "number", newHeadBlock.NumberU64()-1, "hash", newHeadBlock.ParentHash())
							newHeadBlock = bc.genesisBlock
						} else {
							log.Trace("Rewind passed pivot, aiming genesis", "number", newHeadBlock.NumberU64(), "hash", newHeadBlock.Hash(), "pivot", *pivot)
							newHeadBlock = bc.genesisBlock
						}
					}
					if beyondRoot || newHeadBlock.NumberU64() == 0 {
						log.Debug("Rewound to block with state", "number", newHeadBlock.NumberU64(), "hash", newHeadBlock.Hash())
						break
					}
					log.Debug("Skipping block with threshold state", "number", newHeadBlock.NumberU64(), "hash", newHeadBlock.Hash(), "root", newHeadBlock.Root())
					newHeadBlock = bc.GetBlock(newHeadBlock.ParentHash(), newHeadBlock.NumberU64()-1) // Keep rewinding
				}
			}
			rawdb.WriteHeadBlockHash(db, newHeadBlock.Hash())

			// Degrade the chain markers if they are explicitly reverted.
			// In theory we should update all in-memory markers in the
			// last step, however the direction of SetHead is from high
			// to low, so it's safe the update in-memory markers directly.
			bc.currentBlock.Store(newHeadBlock)
			headBlockGauge.Update(int64(newHeadBlock.NumberU64()))
		}
		// Rewind the fast block in a simpleton way to the target head
		if currentFastBlock := bc.CurrentFastBlock(); currentFastBlock != nil && header.Number.Uint64() < currentFastBlock.NumberU64() {
			newHeadFastBlock := bc.GetBlock(header.Hash(), header.Number.Uint64())
			// If either blocks reached nil, reset to the genesis state
			if newHeadFastBlock == nil {
				newHeadFastBlock = bc.genesisBlock
			}
			rawdb.WriteHeadFastBlockHash(db, newHeadFastBlock.Hash())

			// Degrade the chain markers if they are explicitly reverted.
			// In theory we should update all in-memory markers in the
			// last step, however the direction of SetHead is from high
			// to low, so it's safe the update in-memory markers directly.
			bc.currentFastBlock.Store(newHeadFastBlock)
			headFastBlockGauge.Update(int64(newHeadFastBlock.NumberU64()))
		}
		head := bc.CurrentBlock().NumberU64()

		// If setHead underflown the freezer threshold and the block processing
		// intent afterwards is full block importing, delete the chain segment
		// between the stateful-block and the sethead target.
		var wipe bool
		if head+1 < frozen {
			wipe = pivot == nil || head >= *pivot
		}
		return head, wipe // Only force wipe if full synced
	}
	// Rewind the header chain, deleting all block bodies until then
	delFn := func(db ethdb.KeyValueWriter, hash common.Hash, num uint64) {
		// Ignore the error here since light client won't hit this path
		frozen, _ := bc.db.Ancients()
		if num+1 <= frozen {
			// Truncate all relative data(header, total difficulty, body, receipt
			// and canonical hash) from ancient store.
			if err := bc.db.TruncateAncients(num); err != nil {
				log.Crit("Failed to truncate ancient data", "number", num, "err", err)
			}
			// Remove the hash <-> number mapping from the active store.
			rawdb.DeleteHeaderNumber(db, hash)
		} else {
			// Remove relative body and receipts from the active store.
			// The header, total difficulty and canonical hash will be
			// removed in the hc.SetHead function.
			rawdb.DeleteBody(db, hash, num)
			rawdb.DeleteReceipts(db, hash, num)
		}
		// Todo(rjl493456442) txlookup, bloombits, etc
	}
	// If SetHead was only called as a chain reparation method, try to skip
	// touching the header chain altogether, unless the freezer is broken
	if block := bc.CurrentBlock(); block.NumberU64() == head {
		if target, force := updateFn(bc.db, block.Header()); force {
			bc.hc.SetHead(target, updateFn, delFn)
		}
	} else {
		// Rewind the chain to the requested head and keep going backwards until a
		// block with a state is found or fast sync pivot is passed
		log.Warn("Rewinding blockchain", "target", head)
		bc.hc.SetHead(head, updateFn, delFn)
	}
	// Clear out any stale content from the caches
	bc.bodyCache.Purge()
	bc.bodyRLPCache.Purge()
	bc.receiptsCache.Purge()
	bc.blockCache.Purge()
	bc.txLookupCache.Purge()
	bc.futureBlocks.Purge()

	return rootNumber, bc.loadLastState()
}
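For orientation, here is a minimal, hypothetical usage sketch of the rewind entry point. The package name chainutil, the helper rewindTo, and the target height are illustrative assumptions; only core.BlockChain.SetHead is the real API:

package chainutil

import (
	"fmt"

	"github.com/ethereum/go-ethereum/core"
)

// rewindTo rewinds bc to the given height via SetHead, which deletes as
// little data as possible while keeping the chain consistent; internally
// it delegates to SetHeadBeyondRoot with an empty root, as shown above.
func rewindTo(bc *core.BlockChain, target uint64) error {
	if err := bc.SetHead(target); err != nil {
		return fmt.Errorf("rewind to %d failed: %w", target, err)
	}
	return nil
}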
loadLastState is shown below. It first reads the latest head block from the database, then sets currentBlock, currentHeader, and currentFastBlock, and finally retrieves the latest blocks together with their hashes and total difficulties in order to log the chain status:
// loadLastState loads the last known chain state from the database. This method
// assumes that the chain manager mutex is held.
func (bc *BlockChain) loadLastState() error {
	// Restore the last known head block
	head := rawdb.ReadHeadBlockHash(bc.db)
	if head == (common.Hash{}) {
		// Corrupt or empty database, init from scratch
		log.Warn("Empty database, resetting chain")
		return bc.Reset()
	}
	// Make sure the entire head block is available
	currentBlock := bc.GetBlockByHash(head)
	if currentBlock == nil {
		// Corrupt or empty database, init from scratch
		log.Warn("Head block missing, resetting chain", "hash", head)
		return bc.Reset()
	}
	// Everything seems to be fine, set as the head block
	bc.currentBlock.Store(currentBlock)
	headBlockGauge.Update(int64(currentBlock.NumberU64()))

	// Restore the last known head header
	currentHeader := currentBlock.Header()
	if head := rawdb.ReadHeadHeaderHash(bc.db); head != (common.Hash{}) {
		if header := bc.GetHeaderByHash(head); header != nil {
			currentHeader = header
		}
	}
	bc.hc.SetCurrentHeader(currentHeader)

	// Restore the last known head fast block
	bc.currentFastBlock.Store(currentBlock)
	headFastBlockGauge.Update(int64(currentBlock.NumberU64()))

	if head := rawdb.ReadHeadFastBlockHash(bc.db); head != (common.Hash{}) {
		if block := bc.GetBlockByHash(head); block != nil {
			bc.currentFastBlock.Store(block)
			headFastBlockGauge.Update(int64(block.NumberU64()))
		}
	}
	// Issue a status log for the user
	currentFastBlock := bc.CurrentFastBlock()

	headerTd := bc.GetTd(currentHeader.Hash(), currentHeader.Number.Uint64())
	blockTd := bc.GetTd(currentBlock.Hash(), currentBlock.NumberU64())
	fastTd := bc.GetTd(currentFastBlock.Hash(), currentFastBlock.NumberU64())

	log.Info("Loaded most recent local header", "number", currentHeader.Number, "hash", currentHeader.Hash(), "td", headerTd, "age", common.PrettyAge(time.Unix(int64(currentHeader.Time), 0)))
	log.Info("Loaded most recent local full block", "number", currentBlock.Number(), "hash", currentBlock.Hash(), "td", blockTd, "age", common.PrettyAge(time.Unix(int64(currentBlock.Time()), 0)))
	log.Info("Loaded most recent local fast block", "number", currentFastBlock.Number(), "hash", currentFastBlock.Hash(), "td", fastTd, "age", common.PrettyAge(time.Unix(int64(currentFastBlock.Time()), 0)))

	if pivot := rawdb.ReadLastPivotNumber(bc.db); pivot != nil {
		log.Info("Loaded last fast-sync pivot marker", "number", *pivot)
	}
	return nil
}
Afterwards the node rebuilds the verification data for the head block (the ethash cache or the clique voting snapshot). Here, "go bc.update()" is launched to process future blocks on a timer:
func NewBlockChain(db ethdb.Database, cacheConfig *CacheConfig, chainConfig *params.ChainConfig, engine consensus.Engine, vmConfig vm.Config, shouldPreserve func(block *types.Block) bool, txLookupLimit *uint64) (*BlockChain, error) {
	if cacheConfig == nil {
	......
	// Load any existing snapshot, regenerating it if loading failed
	if bc.cacheConfig.SnapshotLimit > 0 {
		// If the chain was rewound past the snapshot persistent layer (causing
		// a recovery block number to be persisted to disk), check if we're still
		// in recovery mode and in that case, don't invalidate the snapshot on a
		// head mismatch.
		var recover bool

		head := bc.CurrentBlock()
		if layer := rawdb.ReadSnapshotRecoveryNumber(bc.db); layer != nil && *layer > head.NumberU64() {
			log.Warn("Enabling snapshot recovery", "chainhead", head.NumberU64(), "diskbase", *layer)
			recover = true
		}
		bc.snaps, _ = snapshot.New(bc.db, bc.stateCache.TrieDB(), bc.cacheConfig.SnapshotLimit, head.Root(), !bc.cacheConfig.SnapshotWait, true, recover)
	}
	// Take ownership of this particular state
	go bc.update()
	if txLookupLimit != nil {
		bc.txLookupLimit = *txLookupLimit

		bc.wg.Add(1)
		go bc.maintainTxIndex(txIndexBlock)
	}
	// If periodic cache journal is required, spin it up.
	if bc.cacheConfig.TrieCleanRejournal > 0 {
		if bc.cacheConfig.TrieCleanRejournal < time.Minute {
			log.Warn("Sanitizing invalid trie cache journal time", "provided", bc.cacheConfig.TrieCleanRejournal, "updated", time.Minute)
			bc.cacheConfig.TrieCleanRejournal = time.Minute
		}
		triedb := bc.stateCache.TrieDB()
		bc.wg.Add(1)
		go func() {
			defer bc.wg.Done()
			triedb.SaveCachePeriodically(bc.cacheConfig.TrieCleanJournal, bc.cacheConfig.TrieCleanRejournal, bc.quit)
		}()
	}
	return bc, nil
}
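The loop started by "go bc.update()" is essentially a ticker that periodically retries the queued future blocks until the chain shuts down. The following is a self-contained sketch of that pattern, not the geth implementation itself; the short interval and the procFutureBlocks stand-in are assumptions for the demo (geth's update loop uses a 5-second ticker):

package main

import (
	"fmt"
	"time"
)

// futureBlockLoop mirrors the pattern behind bc.update(): a ticker fires
// periodically and the queued future blocks are (re)tried, until the quit
// channel is closed. procFutureBlocks stands in for the real handler.
func futureBlockLoop(quit <-chan struct{}, procFutureBlocks func()) {
	ticker := time.NewTicker(100 * time.Millisecond) // geth uses 5 * time.Second
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			procFutureBlocks()
		case <-quit:
			return
		}
	}
}

func main() {
	quit := make(chan struct{})
	go futureBlockLoop(quit, func() { fmt.Println("processing future blocks") })
	time.Sleep(350 * time.Millisecond)
	close(quit) // shut the loop down, as bc.Stop() does via bc.quit
}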
Chain Reset
blockchain provides a Reset function that resets the chain back to the genesis block. The implementation is as follows:
// Reset purges the entire blockchain, restoring it to its genesis state.
func (bc *BlockChain) Reset() error {
	return bc.ResetWithGenesisBlock(bc.genesisBlock)
}

// ResetWithGenesisBlock purges the entire blockchain, restoring it to the
// specified genesis state.
func (bc *BlockChain) ResetWithGenesisBlock(genesis *types.Block) error {
	// Dump the entire block chain and purge the caches
	if err := bc.SetHead(0); err != nil {
		return err
	}
	bc.chainmu.Lock()
	defer bc.chainmu.Unlock()

	// Prepare the genesis block and reinitialise the chain
	batch := bc.db.NewBatch()
	rawdb.WriteTd(batch, genesis.Hash(), genesis.NumberU64(), genesis.Difficulty())
	rawdb.WriteBlock(batch, genesis)
	if err := batch.Write(); err != nil {
		log.Crit("Failed to write genesis block", "err", err)
	}
	bc.writeHeadBlock(genesis)

	// Last update all in-memory chain markers
	bc.genesisBlock = genesis
	bc.currentBlock.Store(bc.genesisBlock)
	headBlockGauge.Update(int64(bc.genesisBlock.NumberU64()))
	bc.hc.SetGenesis(bc.genesisBlock.Header())
	bc.hc.SetCurrentHeader(bc.genesisBlock.Header())
	bc.currentFastBlock.Store(bc.genesisBlock)
	headFastBlockGauge.Update(int64(bc.genesisBlock.NumberU64()))
	return nil
}
Block Insertion
InsertChain inserts a batch of blocks into the database and onto the canonical chain. A for loop first verifies that the block numbers are consecutive and that the hash chain is unbroken, and only then does the actual import begin. At the end of the function, waitGroup.Add(1) registers one goroutine to wait for and waitGroup.Done() releases it; any function that calls waitGroup.Wait() blocks until the matching Done() has executed. Since waitGroup.Wait() sits in blockchain.Stop(), if someone calls Stop() while blocks are being inserted, Stop() has to wait for insertChain() to finish (a runnable miniature of this pattern follows the code below):
// filedir: go-ethereum-1.10.2\core\blockchain.go L1654

// InsertChain attempts to insert the given batch of blocks in to the canonical
// chain or, otherwise, create a fork. If an error is returned it will return
// the index number of the failing block as well an error describing what went
// wrong.
//
// After insertion is done, all accumulated events will be fired.
func (bc *BlockChain) InsertChain(chain types.Blocks) (int, error) {
	// Sanity check that we have something meaningful to import
	if len(chain) == 0 {
		return 0, nil
	}
	bc.blockProcFeed.Send(true)
	defer bc.blockProcFeed.Send(false)

	// Remove already known canon-blocks
	var (
		block, prev *types.Block
	)
	// Do a sanity check that the provided chain is actually ordered and linked
	for i := 1; i < len(chain); i++ {
		block = chain[i]
		prev = chain[i-1]
		if block.NumberU64() != prev.NumberU64()+1 || block.ParentHash() != prev.Hash() {
			// Chain broke ancestry, log a message (programming error) and skip insertion
			log.Error("Non contiguous block insert", "number", block.Number(), "hash", block.Hash(),
				"parent", block.ParentHash(), "prevnumber", prev.Number(), "prevhash", prev.Hash())

			return 0, fmt.Errorf("non contiguous insert: item %d is #%d [%x…], item %d is #%d [%x…] (parent [%x…])", i-1, prev.NumberU64(),
				prev.Hash().Bytes()[:4], i, block.NumberU64(), block.Hash().Bytes()[:4], block.ParentHash().Bytes()[:4])
		}
	}
	// Pre-checks passed, start the full block imports
	bc.wg.Add(1)
	bc.chainmu.Lock()
	n, err := bc.insertChain(chain, true)
	bc.chainmu.Unlock()
	bc.wg.Done()

	return n, err
}
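As described above, Stop() synchronizes with in-flight inserts through a sync.WaitGroup. The runnable miniature below demonstrates just that mechanism in isolation; the print statements are illustrative:

package main

import (
	"fmt"
	"sync"
)

// A minimal demonstration of the sync.WaitGroup pattern described above:
// Wait() blocks until every in-flight insert has called Done(), mirroring
// how BlockChain.Stop() waits for InsertChain to finish.
func main() {
	var wg sync.WaitGroup

	wg.Add(1) // an insert begins
	go func() {
		defer wg.Done() // the insert finishes
		fmt.Println("inserting blocks...")
	}()

	wg.Wait() // Stop() returns only after the insert has completed
	fmt.Println("chain stopped")
}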
insertChain is the concrete implementation of block insertion; its code is shown below:
// filedir: go-ethereum-1.10.2\core\blockchain.go L1696

// insertChain is the internal implementation of InsertChain, which assumes that
// 1) chains are contiguous, and 2) The chain mutex is held.
//
// This method is split out so that import batches that require re-injecting
// historical blocks can do so without releasing the lock, which could lead to
// racey behaviour. If a sidechain import is in progress, and the historic state
// is imported, but then new canon-head is added before the actual sidechain
// completes, then the historic state could be pruned again
func (bc *BlockChain) insertChain(chain types.Blocks, verifySeals bool) (int, error) {
	// If the chain is terminating, don't even bother starting up
	if atomic.LoadInt32(&bc.procInterrupt) == 1 {
		return 0, nil
	}
	// Start a parallel signature recovery (signer will fluke on fork transition, minimal perf loss)
	senderCacher.recoverFromBlocks(types.MakeSigner(bc.chainConfig, chain[0].Number()), chain)

	var (
		stats     = insertStats{startTime: mclock.Now()}
		lastCanon *types.Block
	)
	// Fire a single chain head event if we've progressed the chain
	defer func() {
		if lastCanon != nil && bc.CurrentBlock().Hash() == lastCanon.Hash() {
			bc.chainHeadFeed.Send(ChainHeadEvent{lastCanon})
		}
	}()
	// Start the parallel header verifier
	headers := make([]*types.Header, len(chain))
	seals := make([]bool, len(chain))

	for i, block := range chain {
		headers[i] = block.Header()
		seals[i] = verifySeals
	}
	abort, results := bc.engine.VerifyHeaders(bc, headers, seals)
	defer close(abort)

	// Peek the error for the first block to decide the directing import logic
	it := newInsertIterator(chain, results, bc.validator)
	block, err := it.next()

	// Left-trim all the known blocks
	if err == ErrKnownBlock {
		// First block (and state) is known
		//   1. We did a roll-back, and should now do a re-import
		//   2. The block is stored as a sidechain, and is lying about it's stateroot, and passes a stateroot
		//      from the canonical chain, which has not been verified.
		// Skip all known blocks that are behind us
		var (
			current  = bc.CurrentBlock()
			localTd  = bc.GetTd(current.Hash(), current.NumberU64())
			externTd = bc.GetTd(block.ParentHash(), block.NumberU64()-1) // The first block can't be nil
		)
		for block != nil && err == ErrKnownBlock {
			externTd = new(big.Int).Add(externTd, block.Difficulty())
			if localTd.Cmp(externTd) < 0 {
				break
			}
			log.Debug("Ignoring already known block", "number", block.Number(), "hash", block.Hash())
			stats.ignored++

			block, err = it.next()
		}
		// The remaining blocks are still known blocks, the only scenario here is:
		// During the fast sync, the pivot point is already submitted but rollback
		// happens. Then node resets the head full block to a lower height via `rollback`
		// and leaves a few known blocks in the database.
		//
		// When node runs a fast sync again, it can re-import a batch of known blocks via
		// `insertChain` while a part of them have higher total difficulty than current
		// head full block(new pivot point).
		for block != nil && err == ErrKnownBlock {
			log.Debug("Writing previously known block", "number", block.Number(), "hash", block.Hash())
			if err := bc.writeKnownBlock(block); err != nil {
				return it.index, err
			}
			lastCanon = block

			block, err = it.next()
		}
		// Falls through to the block import
	}
	switch {
	// First block is pruned, insert as sidechain and reorg only if TD grows enough
	case errors.Is(err, consensus.ErrPrunedAncestor):
		log.Debug("Pruned ancestor, inserting as sidechain", "number", block.Number(), "hash", block.Hash())
		return bc.insertSideChain(block, it)

	// First block is future, shove it (and all children) to the future queue (unknown ancestor)
	case errors.Is(err, consensus.ErrFutureBlock) || (errors.Is(err, consensus.ErrUnknownAncestor) && bc.futureBlocks.Contains(it.first().ParentHash())):
		for block != nil && (it.index == 0 || errors.Is(err, consensus.ErrUnknownAncestor)) {
			log.Debug("Future block, postponing import", "number", block.Number(), "hash", block.Hash())
			if err := bc.addFutureBlock(block); err != nil {
				return it.index, err
			}
			block, err = it.next()
		}
		stats.queued += it.processed()
		stats.ignored += it.remaining()

		// If there are any still remaining, mark as ignored
		return it.index, err

	// Some other error occurred, abort
	case err != nil:
		bc.futureBlocks.Remove(block.Hash())
		stats.ignored += len(it.chain)
		bc.reportBlock(block, nil, err)
		return it.index, err
	}
	// No validation errors for the first block (or chain prefix skipped)
	var activeState *state.StateDB
	defer func() {
		// The chain importer is starting and stopping trie prefetchers. If a bad
		// block or other error is hit however, an early return may not properly
		// terminate the background threads. This defer ensures that we clean up
		// and dangling prefetcher, without defering each and holding on live refs.
		if activeState != nil {
			activeState.StopPrefetcher()
		}
	}()

	for ; block != nil && err == nil || err == ErrKnownBlock; block, err = it.next() {
		// If the chain is terminating, stop processing blocks
		if bc.insertStopped() {
			log.Debug("Abort during block processing")
			break
		}
		// If the header is a banned one, straight out abort
		if BadHashes[block.Hash()] {
			bc.reportBlock(block, nil, ErrBlacklistedHash)
			return it.index, ErrBlacklistedHash
		}
		// If the block is known (in the middle of the chain), it's a special case for
		// Clique blocks where they can share state among each other, so importing an
		// older block might complete the state of the subsequent one. In this case,
		// just skip the block (we already validated it once fully (and crashed), since
		// its header and body was already in the database).
		if err == ErrKnownBlock {
			logger := log.Debug
			if bc.chainConfig.Clique == nil {
				logger = log.Warn
			}
			logger("Inserted known block", "number", block.Number(), "hash", block.Hash(),
				"uncles", len(block.Uncles()), "txs", len(block.Transactions()), "gas", block.GasUsed(),
				"root", block.Root())

			// Special case. Commit the empty receipt slice if we meet the known
			// block in the middle. It can only happen in the clique chain. Whenever
			// we insert blocks via `insertSideChain`, we only commit `td`, `header`
			// and `body` if it's non-existent. Since we don't have receipts without
			// reexecution, so nothing to commit. But if the sidechain will be adpoted
			// as the canonical chain eventually, it needs to be reexecuted for missing
			// state, but if it's this special case here(skip reexecution) we will lose
			// the empty receipt entry.
			if len(block.Transactions()) == 0 {
				rawdb.WriteReceipts(bc.db, block.Hash(), block.NumberU64(), nil)
			} else {
				log.Error("Please file an issue, skip known block execution without receipt",
					"hash", block.Hash(), "number", block.NumberU64())
			}
			if err := bc.writeKnownBlock(block); err != nil {
				return it.index, err
			}
			stats.processed++

			// We can assume that logs are empty here, since the only way for consecutive
			// Clique blocks to have the same state is if there are no transactions.
			lastCanon = block
			continue
		}
		// Retrieve the parent block and it's state to execute on top
		start := time.Now()

		parent := it.previous()
		if parent == nil {
			parent = bc.GetHeader(block.ParentHash(), block.NumberU64()-1)
		}
		statedb, err := state.New(parent.Root, bc.stateCache, bc.snaps)
		if err != nil {
			return it.index, err
		}
		// Enable prefetching to pull in trie node paths while processing transactions
		statedb.StartPrefetcher("chain")
		activeState = statedb

		// If we have a followup block, run that against the current state to pre-cache
		// transactions and probabilistically some of the account/storage trie nodes.
		var followupInterrupt uint32
		if !bc.cacheConfig.TrieCleanNoPrefetch {
			if followup, err := it.peek(); followup != nil && err == nil {
				throwaway, _ := state.New(parent.Root, bc.stateCache, bc.snaps)

				go func(start time.Time, followup *types.Block, throwaway *state.StateDB, interrupt *uint32) {
					bc.prefetcher.Prefetch(followup, throwaway, bc.vmConfig, &followupInterrupt)

					blockPrefetchExecuteTimer.Update(time.Since(start))
					if atomic.LoadUint32(interrupt) == 1 {
						blockPrefetchInterruptMeter.Mark(1)
					}
				}(time.Now(), followup, throwaway, &followupInterrupt)
			}
		}
		// Process block using the parent state as reference point
		substart := time.Now()
		receipts, logs, usedGas, err := bc.processor.Process(block, statedb, bc.vmConfig)
		if err != nil {
			bc.reportBlock(block, receipts, err)
			atomic.StoreUint32(&followupInterrupt, 1)
			return it.index, err
		}
		// Update the metrics touched during block processing
		accountReadTimer.Update(statedb.AccountReads)                 // Account reads are complete, we can mark them
		storageReadTimer.Update(statedb.StorageReads)                 // Storage reads are complete, we can mark them
		accountUpdateTimer.Update(statedb.AccountUpdates)             // Account updates are complete, we can mark them
		storageUpdateTimer.Update(statedb.StorageUpdates)             // Storage updates are complete, we can mark them
		snapshotAccountReadTimer.Update(statedb.SnapshotAccountReads) // Account reads are complete, we can mark them
		snapshotStorageReadTimer.Update(statedb.SnapshotStorageReads) // Storage reads are complete, we can mark them

		triehash := statedb.AccountHashes + statedb.StorageHashes // Save to not double count in validation
		trieproc := statedb.SnapshotAccountReads + statedb.AccountReads + statedb.AccountUpdates
		trieproc += statedb.SnapshotStorageReads + statedb.StorageReads + statedb.StorageUpdates

		blockExecutionTimer.Update(time.Since(substart) - trieproc - triehash)

		// Validate the state using the default validator
		substart = time.Now()
		if err := bc.validator.ValidateState(block, statedb, receipts, usedGas); err != nil {
			bc.reportBlock(block, receipts, err)
			atomic.StoreUint32(&followupInterrupt, 1)
			return it.index, err
		}
		proctime := time.Since(start)

		// Update the metrics touched during block validation
		accountHashTimer.Update(statedb.AccountHashes) // Account hashes are complete, we can mark them
		storageHashTimer.Update(statedb.StorageHashes) // Storage hashes are complete, we can mark them

		blockValidationTimer.Update(time.Since(substart) - (statedb.AccountHashes + statedb.StorageHashes - triehash))

		// Write the block to the chain and get the status.
		substart = time.Now()
		status, err := bc.writeBlockWithState(block, receipts, logs, statedb, false)
		atomic.StoreUint32(&followupInterrupt, 1)
		if err != nil {
			return it.index, err
		}
		// Update the metrics touched during block commit
		accountCommitTimer.Update(statedb.AccountCommits)   // Account commits are complete, we can mark them
		storageCommitTimer.Update(statedb.StorageCommits)   // Storage commits are complete, we can mark them
		snapshotCommitTimer.Update(statedb.SnapshotCommits) // Snapshot commits are complete, we can mark them

		blockWriteTimer.Update(time.Since(substart) - statedb.AccountCommits - statedb.StorageCommits - statedb.SnapshotCommits)
		blockInsertTimer.UpdateSince(start)

		switch status {
		case CanonStatTy:
			log.Debug("Inserted new block", "number", block.Number(), "hash", block.Hash(),
				"uncles", len(block.Uncles()), "txs", len(block.Transactions()), "gas", block.GasUsed(),
				"elapsed", common.PrettyDuration(time.Since(start)),
				"root", block.Root())

			lastCanon = block

			// Only count canonical blocks for GC processing time
			bc.gcproc += proctime

		case SideStatTy:
			log.Debug("Inserted forked block", "number", block.Number(), "hash", block.Hash(),
				"diff", block.Difficulty(), "elapsed", common.PrettyDuration(time.Since(start)),
				"txs", len(block.Transactions()), "gas", block.GasUsed(), "uncles", len(block.Uncles()),
				"root", block.Root())

		default:
			// This in theory is impossible, but lets be nice to our future selves and leave
			// a log, instead of trying to track down blocks imports that don't emit logs.
			log.Warn("Inserted block with unknown status", "number", block.Number(), "hash", block.Hash(),
				"diff", block.Difficulty(), "elapsed", common.PrettyDuration(time.Since(start)),
				"txs", len(block.Transactions()), "gas", block.GasUsed(), "uncles", len(block.Uncles()),
				"root", block.Root())
		}
		stats.processed++
		stats.usedGas += usedGas

		dirty, _ := bc.stateCache.TrieDB().Size()
		stats.report(chain, it.index, dirty)
	}
	// Any blocks remaining here? The only ones we care about are the future ones
	if block != nil && errors.Is(err, consensus.ErrFutureBlock) {
		if err := bc.addFutureBlock(block); err != nil {
			return it.index, err
		}
		block, err = it.next()

		for ; block != nil && errors.Is(err, consensus.ErrUnknownAncestor); block, err = it.next() {
			if err := bc.addFutureBlock(block); err != nil {
				return it.index, err
			}
			stats.queued++
		}
	}
	stats.ignored += it.remaining()

	return it.index, err
}
The code above first checks whether the chain is terminating, and if so returns immediately:
// If the chain is terminating, don't even bother starting up
if atomic.LoadInt32(&bc.procInterrupt) == 1 {
	return 0, nil
}
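The terminate check relies on an atomic flag that one goroutine raises and another polls. Below is a tiny self-contained model of the same idiom; the chain type and its method names are stand-ins, not geth's:

package main

import (
	"fmt"
	"sync/atomic"
)

// A tiny model of the procInterrupt flag checked above: Stop() raises the
// flag atomically, and the import loop polls insertStopped() between blocks.
type chain struct{ procInterrupt int32 }

func (c *chain) Stop()               { atomic.StoreInt32(&c.procInterrupt, 1) }
func (c *chain) insertStopped() bool { return atomic.LoadInt32(&c.procInterrupt) == 1 }

func main() {
	c := &chain{}
	fmt.Println("stopped?", c.insertStopped()) // false
	c.Stop()
	fmt.Println("stopped?", c.insertStopped()) // true
}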
Next, the headers and bodies of the blocks are verified. The headers and seals slices are built from the chain; the abort channel is used to signal cancellation, and the results channel carries the verification outcomes. bc.engine.VerifyHeaders() runs asynchronously, streaming its results into the results channel, which is a standard way of expressing asynchrony in Go (a standalone sketch of this idiom follows the excerpt below):
// Start the parallel header verifier
headers := make([]*types.Header, len(chain))
seals := make([]bool, len(chain))

for i, block := range chain {
	headers[i] = block.Header()
	seals[i] = verifySeals
}
abort, results := bc.engine.VerifyHeaders(bc, headers, seals)
defer close(abort)
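The abort/results pair is a common Go idiom for a cancellable asynchronous pipeline. The runnable sketch below reproduces the idiom in isolation; verifyAll and its int items are illustrative stand-ins for engine.VerifyHeaders and the header slice:

package main

import "fmt"

// verifyAll sketches the abort/results channel idiom used by
// engine.VerifyHeaders above: the verifier streams one error per input
// item into results, and closing abort tells it to stop early.
func verifyAll(items []int) (chan<- struct{}, <-chan error) {
	abort := make(chan struct{})
	results := make(chan error, len(items))
	go func() {
		for range items {
			select {
			case <-abort:
				return // caller gave up, stop verifying
			case results <- nil: // nil error: item verified OK
			}
		}
	}()
	return abort, results
}

func main() {
	abort, results := verifyAll([]int{1, 2, 3})
	defer close(abort)
	for i := 0; i < 3; i++ {
		fmt.Println("result:", <-results)
	}
}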
newInsertIterator is then called to create a new iterator over the given blocks, which are assumed to form a contiguous chain:
// Peek the error for the first block to decide the directing import logic
it := newInsertIterator(chain, results, bc.validator)
block, err := it.next()
If a block to be inserted is already known, it is skipped and the iterator advances to the next block, which is validated in turn. The remaining blocks may still be known blocks, and the only scenario is this: during a fast sync the pivot point has already been submitted but a rollback occurs, after which the node resets the head full block to a lower height via `rollback` and leaves a few known blocks in the database. When the node runs a fast sync again, it can re-import a batch of known blocks via `insertChain`, some of which carry a higher total difficulty than the current head full block (the new pivot point). The total-difficulty bookkeeping this loop performs is sketched after the excerpt below:
// Left-trim all the known blocks
if err == ErrKnownBlock {
	// First block (and state) is known
	//   1. We did a roll-back, and should now do a re-import
	//   2. The block is stored as a sidechain, and is lying about it's stateroot, and passes a stateroot
	//      from the canonical chain, which has not been verified.
	// Skip all known blocks that are behind us
	var (
		current  = bc.CurrentBlock()
		localTd  = bc.GetTd(current.Hash(), current.NumberU64())
		externTd = bc.GetTd(block.ParentHash(), block.NumberU64()-1) // The first block can't be nil
	)
	for block != nil && err == ErrKnownBlock {
		externTd = new(big.Int).Add(externTd, block.Difficulty())
		if localTd.Cmp(externTd) < 0 {
			break
		}
		log.Debug("Ignoring already known block", "number", block.Number(), "hash", block.Hash())
		stats.ignored++

		block, err = it.next()
	}
	// The remaining blocks are still known blocks, the only scenario here is:
	// During the fast sync, the pivot point is already submitted but rollback
	// happens. Then node resets the head full block to a lower height via `rollback`
	// and leaves a few known blocks in the database.
	//
	// When node runs a fast sync again, it can re-import a batch of known blocks via
	// `insertChain` while a part of them have higher total difficulty than current
	// head full block(new pivot point).
	for block != nil && err == ErrKnownBlock {
		log.Debug("Writing previously known block", "number", block.Number(), "hash", block.Hash())
		if err := bc.writeKnownBlock(block); err != nil {
			return it.index, err
		}
		lastCanon = block

		block, err = it.next()
	}
	// Falls through to the block import
}
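The skip loop's bookkeeping boils down to accumulating the external chain's total difficulty with math/big and comparing it against the local head's. A runnable sketch with made-up numbers (all the values here are hypothetical):

package main

import (
	"fmt"
	"math/big"
)

// A small sketch of the total-difficulty comparison in the skip loop above:
// the external chain's TD is accumulated block by block, and the loop stops
// skipping once it exceeds the local head's TD.
func main() {
	localTd := big.NewInt(1000) // TD of our current head (hypothetical)
	externTd := big.NewInt(990) // TD at the parent of the first known block (hypothetical)

	difficulties := []*big.Int{big.NewInt(7), big.NewInt(8)} // per-block difficulty

	for _, d := range difficulties {
		externTd = new(big.Int).Add(externTd, d)
		if localTd.Cmp(externTd) < 0 {
			fmt.Println("external chain heavier, stop skipping at TD", externTd)
			return
		}
		fmt.Println("still lighter, skipping block; TD so far", externTd)
	}
}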
Validation then continues: if a block's parent is known but its state is unavailable, the block is inserted as sidechain data; if the block is a future block, it (and all of its children) is pushed onto the future queue (unknown ancestor); any other error aborts the import directly:
switch {
// First block is pruned, insert as sidechain and reorg only if TD grows enough
case errors.Is(err, consensus.ErrPrunedAncestor):
	log.Debug("Pruned ancestor, inserting as sidechain", "number", block.Number(), "hash", block.Hash())
	return bc.insertSideChain(block, it)

// First block is future, shove it (and all children) to the future queue (unknown ancestor)
case errors.Is(err, consensus.ErrFutureBlock) || (errors.Is(err, consensus.ErrUnknownAncestor) && bc.futureBlocks.Contains(it.first().ParentHash())):
	for block != nil && (it.index == 0 || errors.Is(err, consensus.ErrUnknownAncestor)) {
		log.Debug("Future block, postponing import", "number", block.Number(), "hash", block.Hash())
		if err := bc.addFutureBlock(block); err != nil {
			return it.index, err
		}
		block, err = it.next()
	}
	stats.queued += it.processed()
	stats.ignored += it.remaining()

	// If there are any still remaining, mark as ignored
	return it.index, err

// Some other error occurred, abort
case err != nil:
	bc.futureBlocks.Remove(block.Hash())
	stats.ignored += len(it.chain)
	bc.reportBlock(block, nil, err)
	return it.index, err
}
A for loop then walks over the blocks to be inserted: if the chain is terminating, processing stops; if the header carries a banned hash, the import aborts; if the block is already known (in the middle of the chain), it is skipped. After that, the parent block and its latest execution state are retrieved, and if there is a followup block it is run against the current state to pre-cache transactions:
for ; block != nil && err == nil || err == ErrKnownBlock; block, err = it.next() {
	// If the chain is terminating, stop processing blocks
	if bc.insertStopped() {
		log.Debug("Abort during block processing")
		break
	}
	// If the header is a banned one, straight out abort
	if BadHashes[block.Hash()] {
		bc.reportBlock(block, nil, ErrBlacklistedHash)
		return it.index, ErrBlacklistedHash
	}
	// If the block is known (in the middle of the chain), it's a special case for
	// Clique blocks where they can share state among each other, so importing an
	// older block might complete the state of the subsequent one. In this case,
	// just skip the block (we already validated it once fully (and crashed), since
	// its header and body was already in the database).
	if err == ErrKnownBlock {
		logger := log.Debug
		if bc.chainConfig.Clique == nil {
			logger = log.Warn
		}
		logger("Inserted known block", "number", block.Number(), "hash", block.Hash(),
			"uncles", len(block.Uncles()), "txs", len(block.Transactions()), "gas", block.GasUsed(),
			"root", block.Root())

		// Special case. Commit the empty receipt slice if we meet the known
		// block in the middle. It can only happen in the clique chain. Whenever
		// we insert blocks via `insertSideChain`, we only commit `td`, `header`
		// and `body` if it's non-existent. Since we don't have receipts without
		// reexecution, so nothing to commit. But if the sidechain will be adpoted
		// as the canonical chain eventually, it needs to be reexecuted for missing
		// state, but if it's this special case here(skip reexecution) we will lose
		// the empty receipt entry.
		if len(block.Transactions()) == 0 {
			rawdb.WriteReceipts(bc.db, block.Hash(), block.NumberU64(), nil)
		} else {
			log.Error("Please file an issue, skip known block execution without receipt",
				"hash", block.Hash(), "number", block.NumberU64())
		}
		if err := bc.writeKnownBlock(block); err != nil {
			return it.index, err
		}
		stats.processed++

		// We can assume that logs are empty here, since the only way for consecutive
		// Clique blocks to have the same state is if there are no transactions.
		lastCanon = block
		continue
	}
	......
}
If all of the checks above pass, the block's transactions are executed and the resulting state is validated; otherwise the import exits. The state is first opened on top of the parent block, then bc.processor.Process(block, statedb, bc.vmConfig) executes the transactions and updates the state, and finally the resulting state is validated against the data recorded in the header:
// Retrieve the parent block and it's state to execute on top
start := time.Now()

parent := it.previous()
if parent == nil {
	parent = bc.GetHeader(block.ParentHash(), block.NumberU64()-1)
}
statedb, err := state.New(parent.Root, bc.stateCache, bc.snaps)
if err != nil {
	return it.index, err
}
// Enable prefetching to pull in trie node paths while processing transactions
statedb.StartPrefetcher("chain")
activeState = statedb

// If we have a followup block, run that against the current state to pre-cache
// transactions and probabilistically some of the account/storage trie nodes.
var followupInterrupt uint32
if !bc.cacheConfig.TrieCleanNoPrefetch {
	if followup, err := it.peek(); followup != nil && err == nil {
		throwaway, _ := state.New(parent.Root, bc.stateCache, bc.snaps)

		go func(start time.Time, followup *types.Block, throwaway *state.StateDB, interrupt *uint32) {
			bc.prefetcher.Prefetch(followup, throwaway, bc.vmConfig, &followupInterrupt)

			blockPrefetchExecuteTimer.Update(time.Since(start))
			if atomic.LoadUint32(interrupt) == 1 {
				blockPrefetchInterruptMeter.Mark(1)
			}
		}(time.Now(), followup, throwaway, &followupInterrupt)
	}
}
// Process block using the parent state as reference point
substart := time.Now()
receipts, logs, usedGas, err := bc.processor.Process(block, statedb, bc.vmConfig)
if err != nil {
	bc.reportBlock(block, receipts, err)
	atomic.StoreUint32(&followupInterrupt, 1)
	return it.index, err
}
// Update the metrics touched during block processing
accountReadTimer.Update(statedb.AccountReads)                 // Account reads are complete, we can mark them
storageReadTimer.Update(statedb.StorageReads)                 // Storage reads are complete, we can mark them
accountUpdateTimer.Update(statedb.AccountUpdates)             // Account updates are complete, we can mark them
storageUpdateTimer.Update(statedb.StorageUpdates)             // Storage updates are complete, we can mark them
snapshotAccountReadTimer.Update(statedb.SnapshotAccountReads) // Account reads are complete, we can mark them
snapshotStorageReadTimer.Update(statedb.SnapshotStorageReads) // Storage reads are complete, we can mark them

triehash := statedb.AccountHashes + statedb.StorageHashes // Save to not double count in validation
trieproc := statedb.SnapshotAccountReads + statedb.AccountReads + statedb.AccountUpdates
trieproc += statedb.SnapshotStorageReads + statedb.StorageReads + statedb.StorageUpdates

blockExecutionTimer.Update(time.Since(substart) - trieproc - triehash)

// Validate the state using the default validator
substart = time.Now()
if err := bc.validator.ValidateState(block, statedb, receipts, usedGas); err != nil {
	bc.reportBlock(block, receipts, err)
	atomic.StoreUint32(&followupInterrupt, 1)
	return it.index, err
}
proctime := time.Since(start)
The Process() method executes all the transactions contained in the block and, from their execution, produces the receipts and logs (in fast mode the receipt data is synced from peers, while in full mode the transactions are replayed locally and the receipts are regenerated):
// filedir: go-ethereum-1.10.2\core\state_processor.go L50

// Process processes the state changes according to the Ethereum rules by running
// the transaction messages using the statedb and applying any rewards to both
// the processor (coinbase) and any included uncles.
//
// Process returns the receipts and logs accumulated during the process and
// returns the amount of gas that was used in the process. If any of the
// transactions failed to execute due to insufficient gas it will return an error.
func (p *StateProcessor) Process(block *types.Block, statedb *state.StateDB, cfg vm.Config) (types.Receipts, []*types.Log, uint64, error) {
	var (
		receipts types.Receipts
		usedGas  = new(uint64)
		header   = block.Header()
		allLogs  []*types.Log
		gp       = new(GasPool).AddGas(block.GasLimit())
	)
	// Mutate the block and state according to any hard-fork specs
	if p.config.DAOForkSupport && p.config.DAOForkBlock != nil && p.config.DAOForkBlock.Cmp(block.Number()) == 0 {
		misc.ApplyDAOHardFork(statedb)
	}
	blockContext := NewEVMBlockContext(header, p.bc, nil)
	vmenv := vm.NewEVM(blockContext, vm.TxContext{}, statedb, p.config, cfg)
	// Iterate over and process the individual transactions
	for i, tx := range block.Transactions() {
		msg, err := tx.AsMessage(types.MakeSigner(p.config, header.Number))
		if err != nil {
			return nil, nil, 0, err
		}
		statedb.Prepare(tx.Hash(), block.Hash(), i)
		receipt, err := applyTransaction(msg, p.config, p.bc, nil, gp, statedb, header, tx, usedGas, vmenv)
		if err != nil {
			return nil, nil, 0, fmt.Errorf("could not apply tx %d [%v]: %w", i, tx.Hash().Hex(), err)
		}
		receipts = append(receipts, receipt)
		allLogs = append(allLogs, receipt.Logs...)
	}
	// Finalize the block, applying any consensus engine specific extras (e.g. block rewards)
	p.engine.Finalize(p.bc, header, statedb, block.Transactions(), block.Uncles())

	return receipts, allLogs, *usedGas, nil
}
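Inside the transaction loop, each tx is first turned into a message (sender, gas, payload) using a signer appropriate for the chain. The sketch below isolates just that step; the package name and the toMessages helper are hypothetical, and types.LatestSignerForChainID is used here as a simplification of the types.MakeSigner(p.config, header.Number) call that Process itself makes:

package chainutil

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/core/types"
)

// toMessages sketches the per-transaction preparation done inside Process:
// pick a signer for the chain, then derive the message (sender, gas,
// payload) from each transaction. The helper itself is hypothetical.
func toMessages(txs types.Transactions, chainID *big.Int) error {
	signer := types.LatestSignerForChainID(chainID) // simplification; Process uses types.MakeSigner
	for i, tx := range txs {
		msg, err := tx.AsMessage(signer) // recovers the sender from the signature
		if err != nil {
			return fmt.Errorf("tx %d: %w", i, err)
		}
		fmt.Println("from:", msg.From(), "gas:", msg.Gas())
	}
	return nil
}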
If validation succeeds, writeBlockWithState is called to insert the block into the database and write it onto the canonical chain, handling any fork along the way:
	// Update the metrics touched during block validation
	accountHashTimer.Update(statedb.AccountHashes) // Account hashes are complete, we can mark them
	storageHashTimer.Update(statedb.StorageHashes) // Storage hashes are complete, we can mark them

	blockValidationTimer.Update(time.Since(substart) - (statedb.AccountHashes + statedb.StorageHashes - triehash))

	// Write the block to the chain and get the status.
	substart = time.Now()
	status, err := bc.writeBlockWithState(block, receipts, logs, statedb, false)
	atomic.StoreUint32(&followupInterrupt, 1)
	if err != nil {
		return it.index, err
	}
	// Update the metrics touched during block commit
	accountCommitTimer.Update(statedb.AccountCommits)   // Account commits are complete, we can mark them
	storageCommitTimer.Update(statedb.StorageCommits)   // Storage commits are complete, we can mark them
	snapshotCommitTimer.Update(statedb.SnapshotCommits) // Snapshot commits are complete, we can mark them

	blockWriteTimer.Update(time.Since(substart) - statedb.AccountCommits - statedb.StorageCommits - statedb.SnapshotCommits)
	blockInsertTimer.UpdateSince(start)

	switch status {
	case CanonStatTy:
		log.Debug("Inserted new block", "number", block.Number(), "hash", block.Hash(),
			"uncles", len(block.Uncles()), "txs", len(block.Transactions()), "gas", block.GasUsed(),
			"elapsed", common.PrettyDuration(time.Since(start)),
			"root", block.Root())

		lastCanon = block

		// Only count canonical blocks for GC processing time
		bc.gcproc += proctime

	case SideStatTy:
		log.Debug("Inserted forked block", "number", block.Number(), "hash", block.Hash(),
			"diff", block.Difficulty(), "elapsed", common.PrettyDuration(time.Since(start)),
			"txs", len(block.Transactions()), "gas", block.GasUsed(), "uncles", len(block.Uncles()),
			"root", block.Root())

	default:
		// This in theory is impossible, but lets be nice to our future selves and leave
		// a log, instead of trying to track down blocks imports that don't emit logs.
		log.Warn("Inserted block with unknown status", "number", block.Number(), "hash", block.Hash(),
			"diff", block.Difficulty(), "elapsed", common.PrettyDuration(time.Since(start)),
			"txs", len(block.Transactions()), "gas", block.GasUsed(), "uncles", len(block.Uncles()),
			"root", block.Root())
	}
	stats.processed++
	stats.usedGas += usedGas

	dirty, _ := bc.stateCache.TrieDB().Size()
	stats.report(chain, it.index, dirty)
}
......
writeBlockWithState writes a block to the database and onto the canonical chain. It first fetches the parent's total difficulty (ptd), adds the block's own difficulty to compute the new total difficulty, and writes it to the database. rawdb.WriteBlock(blockBatch, block) then persists the block's header and body, and state.Commit(bc.chainConfig.IsEIP158(block.Number())) commits the state to the database and returns the state root. The bc.stateCache trie cache is then managed and garbage-collected according to the configured rules. Finally, if the block's parent is not the current local head and the new chain carries a higher total difficulty than the old one, bc.reorg(currentBlock, block) is called so that the heavier blocks are moved onto the canonical chain (the fork-choice rule is sketched after the code below):
// filedir: go-ethereum-1.10.2\core\blockchain.go L1500

// writeBlockWithState writes the block and all associated state to the database,
// but is expects the chain mutex to be held.
func (bc *BlockChain) writeBlockWithState(block *types.Block, receipts []*types.Receipt, logs []*types.Log, state *state.StateDB, emitHeadEvent bool) (status WriteStatus, err error) {
	bc.wg.Add(1)
	defer bc.wg.Done()

	// Calculate the total difficulty of the block
	ptd := bc.GetTd(block.ParentHash(), block.NumberU64()-1)
	if ptd == nil {
		return NonStatTy, consensus.ErrUnknownAncestor
	}
	// Make sure no inconsistent state is leaked during insertion
	currentBlock := bc.CurrentBlock()
	localTd := bc.GetTd(currentBlock.Hash(), currentBlock.NumberU64())
	externTd := new(big.Int).Add(block.Difficulty(), ptd)

	// Irrelevant of the canonical status, write the block itself to the database.
	//
	// Note all the components of block(td, hash->number map, header, body, receipts)
	// should be written atomically. BlockBatch is used for containing all components.
	blockBatch := bc.db.NewBatch()
	rawdb.WriteTd(blockBatch, block.Hash(), block.NumberU64(), externTd)
	rawdb.WriteBlock(blockBatch, block)
	rawdb.WriteReceipts(blockBatch, block.Hash(), block.NumberU64(), receipts)
	rawdb.WritePreimages(blockBatch, state.Preimages())
	if err := blockBatch.Write(); err != nil {
		log.Crit("Failed to write block into disk", "err", err)
	}
	// Commit all cached state changes into underlying memory database.
	root, err := state.Commit(bc.chainConfig.IsEIP158(block.Number()))
	if err != nil {
		return NonStatTy, err
	}
	triedb := bc.stateCache.TrieDB()

	// If we're running an archive node, always flush
	if bc.cacheConfig.TrieDirtyDisabled {
		if err := triedb.Commit(root, false, nil); err != nil {
			return NonStatTy, err
		}
	} else {
		// Full but not archive node, do proper garbage collection
		triedb.Reference(root, common.Hash{}) // metadata reference to keep trie alive
		bc.triegc.Push(root, -int64(block.NumberU64()))

		if current := block.NumberU64(); current > TriesInMemory {
			// If we exceeded our memory allowance, flush matured singleton nodes to disk
			var (
				nodes, imgs = triedb.Size()
				limit       = common.StorageSize(bc.cacheConfig.TrieDirtyLimit) * 1024 * 1024
			)
			if nodes > limit || imgs > 4*1024*1024 {
				triedb.Cap(limit - ethdb.IdealBatchSize)
			}
			// Find the next state trie we need to commit
			chosen := current - TriesInMemory

			// If we exceeded out time allowance, flush an entire trie to disk
			if bc.gcproc > bc.cacheConfig.TrieTimeLimit {
				// If the header is missing (canonical chain behind), we're reorging a low
				// diff sidechain. Suspend committing until this operation is completed.
				header := bc.GetHeaderByNumber(chosen)
				if header == nil {
					log.Warn("Reorg in progress, trie commit postponed", "number", chosen)
				} else {
					// If we're exceeding limits but haven't reached a large enough memory gap,
					// warn the user that the system is becoming unstable.
					if chosen < lastWrite+TriesInMemory && bc.gcproc >= 2*bc.cacheConfig.TrieTimeLimit {
						log.Info("State in memory for too long, committing", "time", bc.gcproc, "allowance", bc.cacheConfig.TrieTimeLimit, "optimum", float64(chosen-lastWrite)/TriesInMemory)
					}
					// Flush an entire trie and restart the counters
					triedb.Commit(header.Root, true, nil)
					lastWrite = chosen
					bc.gcproc = 0
				}
			}
			// Garbage collect anything below our required write retention
			for !bc.triegc.Empty() {
				root, number := bc.triegc.Pop()
				if uint64(-number) > chosen {
					bc.triegc.Push(root, number)
					break
				}
				triedb.Dereference(root.(common.Hash))
			}
		}
	}
	// If the total difficulty is higher than our known, add it to the canonical chain
	// Second clause in the if statement reduces the vulnerability to selfish mining.
	// Please refer to http://www.cs.cornell.edu/~ie53/publications/btcProcFC.pdf
	reorg := externTd.Cmp(localTd) > 0
	currentBlock = bc.CurrentBlock()
	if !reorg && externTd.Cmp(localTd) == 0 {
		// Split same-difficulty blocks by number, then preferentially select
		// the block generated by the local miner as the canonical block.
		if block.NumberU64() < currentBlock.NumberU64() {
			reorg = true
		} else if block.NumberU64() == currentBlock.NumberU64() {
			var currentPreserve, blockPreserve bool
			if bc.shouldPreserve != nil {
				currentPreserve, blockPreserve = bc.shouldPreserve(currentBlock), bc.shouldPreserve(block)
			}
			reorg = !currentPreserve && (blockPreserve || mrand.Float64() < 0.5)
		}
	}
	if reorg {
		// Reorganise the chain if the parent is not the head block
		if block.ParentHash() != currentBlock.Hash() {
			if err := bc.reorg(currentBlock, block); err != nil {
				return NonStatTy, err
			}
		}
		status = CanonStatTy
	} else {
		status = SideStatTy
	}
	// Set new head.
	if status == CanonStatTy {
		bc.writeHeadBlock(block)
	}
	bc.futureBlocks.Remove(block.Hash())

	if status == CanonStatTy {
		bc.chainFeed.Send(ChainEvent{Block: block, Hash: block.Hash(), Logs: logs})
		if len(logs) > 0 {
			bc.logsFeed.Send(logs)
		}
		// In theory we should fire a ChainHeadEvent when we inject
		// a canonical block, but sometimes we can insert a batch of
		// canonicial blocks. Avoid firing too much ChainHeadEvents,
		// we will fire an accumulated ChainHeadEvent and disable fire
		// event here.
		if emitHeadEvent {
			bc.chainHeadFeed.Send(ChainHeadEvent{Block: block})
		}
	} else {
		bc.chainSideFeed.Send(ChainSideEvent{Block: block})
	}
	return status, nil
}
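The fork-choice decision near the end of writeBlockWithState can be summarized as: reorg when the external total difficulty is strictly higher, and break equal-TD ties at the same height randomly to blunt selfish-mining strategies. A runnable sketch with hypothetical inputs (shouldReorg and its arguments are illustrative, and the shouldPreserve miner preference is omitted for brevity):

package main

import (
	"fmt"
	"math/big"
	mrand "math/rand"
)

// shouldReorg sketches the fork-choice rule above: a new block becomes
// canonical when its chain's total difficulty is strictly higher; at equal
// TD, a lower height wins, and equal heights are split by a coin flip.
func shouldReorg(localTd, externTd *big.Int, localNum, externNum uint64) bool {
	if externTd.Cmp(localTd) > 0 {
		return true
	}
	if externTd.Cmp(localTd) == 0 {
		if externNum < localNum {
			return true // prefer the shorter chain at equal TD
		}
		if externNum == localNum {
			return mrand.Float64() < 0.5 // random tie-break against selfish mining
		}
	}
	return false
}

func main() {
	fmt.Println(shouldReorg(big.NewInt(100), big.NewInt(101), 10, 10)) // true: heavier chain
	fmt.Println(shouldReorg(big.NewInt(100), big.NewInt(99), 10, 10))  // false: lighter chain
}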
Once a block has been verified, added, persisted, and its events reported, the loop moves on. If future blocks remain, addFutureBlock is called, which also enforces the maximum number of blocks allowed in the future queue, and the iterator keeps advancing through the remaining blocks:
	// Any blocks remaining here? The only ones we care about are the future ones
	if block != nil && errors.Is(err, consensus.ErrFutureBlock) {
		if err := bc.addFutureBlock(block); err != nil {
			return it.index, err
		}
		block, err = it.next()

		for ; block != nil && errors.Is(err, consensus.ErrUnknownAncestor); block, err = it.next() {
			if err := bc.addFutureBlock(block); err != nil {
				return it.index, err
			}
			stats.queued++
		}
	}
	stats.ignored += it.remaining()

	return it.index, err
}
Fork Handling
As a supplement, let us look at the reorg() function, which handles forks: its job is to take the branch that used to be a side chain and make it the canonical chain:
// filedir: go-ethereum-1.10.2\core\blockchain.go L2124

// reorg takes two blocks, an old chain and a new chain and will reconstruct the
// blocks and inserts them to be part of the new canonical chain and accumulates
// potential missing transactions and post an event about them.
func (bc *BlockChain) reorg(oldBlock, newBlock *types.Block) error {
	var (
		newChain    types.Blocks
		oldChain    types.Blocks
		commonBlock *types.Block

		deletedTxs types.Transactions
		addedTxs   types.Transactions

		deletedLogs [][]*types.Log
		rebirthLogs [][]*types.Log

		// collectLogs collects the logs that were generated or removed during
		// the processing of the block that corresponds with the given hash.
		// These logs are later announced as deleted or reborn
		collectLogs = func(hash common.Hash, removed bool) {
			number := bc.hc.GetBlockNumber(hash)
			if number == nil {
				return
			}
			receipts := rawdb.ReadReceipts(bc.db, hash, *number, bc.chainConfig)

			var logs []*types.Log
			for _, receipt := range receipts {
				for _, log := range receipt.Logs {
					l := *log
					if removed {
						l.Removed = true
					} else {
					}
					logs = append(logs, &l)
				}
			}
			if len(logs) > 0 {
				if removed {
					deletedLogs = append(deletedLogs, logs)
				} else {
					rebirthLogs = append(rebirthLogs, logs)
				}
			}
		}
		// mergeLogs returns a merged log slice with specified sort order.
		mergeLogs = func(logs [][]*types.Log, reverse bool) []*types.Log {
			var ret []*types.Log
			if reverse {
				for i := len(logs) - 1; i >= 0; i-- {
					ret = append(ret, logs[i]...)
				}
			} else {
				for i := 0; i < len(logs); i++ {
					ret = append(ret, logs[i]...)
				}
			}
			return ret
		}
	)
	// Reduce the longer chain to the same number as the shorter one
	if oldBlock.NumberU64() > newBlock.NumberU64() {
		// Old chain is longer, gather all transactions and logs as deleted ones
		for ; oldBlock != nil && oldBlock.NumberU64() != newBlock.NumberU64(); oldBlock = bc.GetBlock(oldBlock.ParentHash(), oldBlock.NumberU64()-1) {
			oldChain = append(oldChain, oldBlock)
			deletedTxs = append(deletedTxs, oldBlock.Transactions()...)
			collectLogs(oldBlock.Hash(), true)
		}
	} else {
		// New chain is longer, stash all blocks away for subsequent insertion
		for ; newBlock != nil && newBlock.NumberU64() != oldBlock.NumberU64(); newBlock = bc.GetBlock(newBlock.ParentHash(), newBlock.NumberU64()-1) {
			newChain = append(newChain, newBlock)
		}
	}
	if oldBlock == nil {
		return fmt.Errorf("invalid old chain")
	}
	if newBlock == nil {
		return fmt.Errorf("invalid new chain")
	}
	// Both sides of the reorg are at the same number, reduce both until the common
	// ancestor is found
	for {
		// If the common ancestor was found, bail out
		if oldBlock.Hash() == newBlock.Hash() {
			commonBlock = oldBlock
			break
		}
		// Remove an old block as well as stash away a new block
		oldChain = append(oldChain, oldBlock)
		deletedTxs = append(deletedTxs, oldBlock.Transactions()...)
		collectLogs(oldBlock.Hash(), true)

		newChain = append(newChain, newBlock)

		// Step back with both chains
		oldBlock = bc.GetBlock(oldBlock.ParentHash(), oldBlock.NumberU64()-1)
		if oldBlock == nil {
			return fmt.Errorf("invalid old chain")
		}
		newBlock = bc.GetBlock(newBlock.ParentHash(), newBlock.NumberU64()-1)
		if newBlock == nil {
			return fmt.Errorf("invalid new chain")
		}
	}
	// Ensure the user sees large reorgs
	if len(oldChain) > 0 && len(newChain) > 0 {
		logFn := log.Info
		msg := "Chain reorg detected"
		if len(oldChain) > 63 {
			msg = "Large chain reorg detected"
			logFn = log.Warn
		}
		logFn(msg, "number", commonBlock.Number(), "hash", commonBlock.Hash(),
			"drop", len(oldChain), "dropfrom", oldChain[0].Hash(), "add", len(newChain), "addfrom", newChain[0].Hash())
		blockReorgAddMeter.Mark(int64(len(newChain)))
		blockReorgDropMeter.Mark(int64(len(oldChain)))
		blockReorgMeter.Mark(1)
	} else {
		log.Error("Impossible reorg, please file an issue", "oldnum", oldBlock.Number(), "oldhash", oldBlock.Hash(), "newnum", newBlock.Number(), "newhash", newBlock.Hash())
	}
	// Insert the new chain(except the head block(reverse order)),
	// taking care of the proper incremental order.
	for i := len(newChain) - 1; i >= 1; i-- {
		// Insert the block in the canonical way, re-writing history
		bc.writeHeadBlock(newChain[i])

		// Collect reborn logs due to chain reorg
		collectLogs(newChain[i].Hash(), false)

		// Collect the new added transactions.
		addedTxs = append(addedTxs, newChain[i].Transactions()...)
	}
	// Delete useless indexes right now which includes the non-canonical
	// transaction indexes, canonical chain indexes which above the head.
	indexesBatch := bc.db.NewBatch()
	for _, tx := range types.TxDifference(deletedTxs, addedTxs) {
		rawdb.DeleteTxLookupEntry(indexesBatch, tx.Hash())
	}
	// Delete any canonical number assignments above the new head
	number := bc.CurrentBlock().NumberU64()
	for i := number + 1; ; i++ {
		hash := rawdb.ReadCanonicalHash(bc.db, i)
		if hash == (common.Hash{}) {
			break
		}
		rawdb.DeleteCanonicalHash(indexesBatch, i)
	}
	if err := indexesBatch.Write(); err != nil {
		log.Crit("Failed to delete useless indexes", "err", err)
	}
	// If any logs need to be fired, do it now. In theory we could avoid creating
	// this goroutine if there are no events to fire, but realistcally that only
	// ever happens if we're reorging empty blocks, which will only happen on idle
	// networks where performance is not an issue either way.
	if len(deletedLogs) > 0 {
		bc.rmLogsFeed.Send(RemovedLogsEvent{mergeLogs(deletedLogs, true)})
	}
	if len(rebirthLogs) > 0 {
		bc.logsFeed.Send(mergeLogs(rebirthLogs, false))
	}
	if len(oldChain) > 0 {
		for i := len(oldChain) - 1; i >= 0; i-- {
			bc.chainSideFeed.Send(ChainSideEvent{Block: oldChain[i]})
		}
	}
	return nil
}
The main steps are as follows:
Step 1: Find the common ancestor of the new chain and the old chain. If the old branch is higher than the new one, walk the old branch down until both are at the same height, collecting the transactions and logs on the old branch along the way; if the new branch is higher, walk the new branch down instead. Once both are at the same height, step both chains back in lockstep until the common ancestor is found, continuing to collect logs and events (a toy version of this ancestor search is sketched after the excerpt below):
// Reduce the longer chain to the same number as the shorter one
if oldBlock.NumberU64() > newBlock.NumberU64() {
	// Old chain is longer, gather all transactions and logs as deleted ones
	for ; oldBlock != nil && oldBlock.NumberU64() != newBlock.NumberU64(); oldBlock = bc.GetBlock(oldBlock.ParentHash(), oldBlock.NumberU64()-1) {
		oldChain = append(oldChain, oldBlock)
		deletedTxs = append(deletedTxs, oldBlock.Transactions()...)
		collectLogs(oldBlock.Hash(), true)
	}
} else {
	// New chain is longer, stash all blocks away for subsequent insertion
	for ; newBlock != nil && newBlock.NumberU64() != oldBlock.NumberU64(); newBlock = bc.GetBlock(newBlock.ParentHash(), newBlock.NumberU64()-1) {
		newChain = append(newChain, newBlock)
	}
}
if oldBlock == nil {
	return fmt.Errorf("invalid old chain")
}
if newBlock == nil {
	return fmt.Errorf("invalid new chain")
}
// Both sides of the reorg are at the same number, reduce both until the common
// ancestor is found
for {
	// If the common ancestor was found, bail out
	if oldBlock.Hash() == newBlock.Hash() {
		commonBlock = oldBlock
		break
	}
	// Remove an old block as well as stash away a new block
	oldChain = append(oldChain, oldBlock)
	deletedTxs = append(deletedTxs, oldBlock.Transactions()...)
	collectLogs(oldBlock.Hash(), true)

	newChain = append(newChain, newBlock)

	// Step back with both chains
	oldBlock = bc.GetBlock(oldBlock.ParentHash(), oldBlock.NumberU64()-1)
	if oldBlock == nil {
		return fmt.Errorf("invalid old chain")
	}
	newBlock = bc.GetBlock(newBlock.ParentHash(), newBlock.NumberU64()-1)
	if newBlock == nil {
		return fmt.Errorf("invalid new chain")
	}
}
// Ensure the user sees large reorgs
if len(oldChain) > 0 && len(newChain) > 0 {
	logFn := log.Info
	msg := "Chain reorg detected"
	if len(oldChain) > 63 {
		msg = "Large chain reorg detected"
		logFn = log.Warn
	}
	logFn(msg, "number", commonBlock.Number(), "hash", commonBlock.Hash(),
		"drop", len(oldChain), "dropfrom", oldChain[0].Hash(), "add", len(newChain), "addfrom", newChain[0].Hash())
	blockReorgAddMeter.Mark(int64(len(newChain)))
	blockReorgDropMeter.Mark(int64(len(oldChain)))
	blockReorgMeter.Mark(1)
} else {
	log.Error("Impossible reorg, please file an issue", "oldnum", oldBlock.Number(), "oldhash", oldBlock.Hash(), "newnum", newBlock.Number(), "newhash", newBlock.Hash())
}
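Conceptually, Step 1 is a common-ancestor search over parent links. The toy sketch below implements the same walk on a simplified Block type; Block, its string Hash, and the sample chain in main are stand-ins, not geth types:

package main

import "fmt"

// Block is a toy stand-in: number, hash and a parent pointer are all we
// need to mirror the ancestor search above.
type Block struct {
	Number uint64
	Hash   string
	Parent *Block
}

// commonAncestor first brings both cursors to the same height, then steps
// them back in lockstep until their hashes match.
func commonAncestor(oldB, newB *Block) *Block {
	for oldB != nil && newB != nil && oldB.Number > newB.Number {
		oldB = oldB.Parent
	}
	for oldB != nil && newB != nil && newB.Number > oldB.Number {
		newB = newB.Parent
	}
	for oldB != nil && newB != nil {
		if oldB.Hash == newB.Hash {
			return oldB
		}
		oldB, newB = oldB.Parent, newB.Parent
	}
	return nil // no common ancestor
}

func main() {
	g := &Block{Number: 0, Hash: "genesis"}
	a1 := &Block{Number: 1, Hash: "a1", Parent: g}  // old branch
	b1 := &Block{Number: 1, Hash: "b1", Parent: g}  // new branch
	b2 := &Block{Number: 2, Hash: "b2", Parent: b1} // new branch is longer
	fmt.Println(commonAncestor(a1, b2).Hash)        // genesis
}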
Step 2: Insert the new chain into the canonical chain, collecting all transactions that are inserted along the way:
// Insert the new chain(except the head block(reverse order)),
// taking care of the proper incremental order.
for i := len(newChain) - 1; i >= 1; i-- {
	// Insert the block in the canonical way, re-writing history
	bc.writeHeadBlock(newChain[i])

	// Collect reborn logs due to chain reorg
	collectLogs(newChain[i].Hash(), false)

	// Collect the new added transactions.
	addedTxs = append(addedTxs, newChain[i].Transactions()...)
}
Step 3: Compute the difference between the deleted and added transaction lists, and delete the database lookup entries of the transactions that are not on the new chain (a simplified equivalent of the set difference is sketched after the excerpt below):
// Delete useless indexes right now which includes the non-canonical
// transaction indexes, canonical chain indexes which above the head.
indexesBatch := bc.db.NewBatch()
for _, tx := range types.TxDifference(deletedTxs, addedTxs) {
	rawdb.DeleteTxLookupEntry(indexesBatch, tx.Hash())
}
// Delete any canonical number assignments above the new head
number := bc.CurrentBlock().NumberU64()
for i := number + 1; ; i++ {
	hash := rawdb.ReadCanonicalHash(bc.db, i)
	if hash == (common.Hash{}) {
		break
	}
	rawdb.DeleteCanonicalHash(indexesBatch, i)
}
if err := indexesBatch.Write(); err != nil {
	log.Crit("Failed to delete useless indexes", "err", err)
}
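types.TxDifference used above is a plain set difference keyed by transaction hash. A simplified, runnable equivalent with strings standing in for hashes:

package main

import "fmt"

// txDifference keeps every element of a whose key does not occur in b,
// mirroring the semantics of types.TxDifference(deletedTxs, addedTxs).
func txDifference(a, b []string) []string {
	keep := make([]string, 0, len(a))
	remove := make(map[string]struct{}, len(b))
	for _, h := range b {
		remove[h] = struct{}{}
	}
	for _, h := range a {
		if _, ok := remove[h]; !ok {
			keep = append(keep, h)
		}
	}
	return keep
}

func main() {
	deleted := []string{"tx-a", "tx-b", "tx-c"}
	added := []string{"tx-b"}
	fmt.Println(txDifference(deleted, added)) // [tx-a tx-c]: lookup entries to drop
}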
Step 4: Emit the chain reorganisation events and the log deletion events:
// If any logs need to be fired, do it now. In theory we could avoid creating
// this goroutine if there are no events to fire, but realistcally that only
// ever happens if we're reorging empty blocks, which will only happen on idle
// networks where performance is not an issue either way.
if len(deletedLogs) > 0 {
	bc.rmLogsFeed.Send(RemovedLogsEvent{mergeLogs(deletedLogs, true)})
}
if len(rebirthLogs) > 0 {
	bc.logsFeed.Send(mergeLogs(rebirthLogs, false))
}
if len(oldChain) > 0 {
	for i := len(oldChain) - 1; i >= 0; i-- {
		bc.chainSideFeed.Send(ChainSideEvent{Block: oldChain[i]})
	}
}
return nil
Summary
This article examined the core data structures of blocks and the blockchain in detail and walked through their basic operations, including generating the genesis block, creating new blocks, validating blocks, computing the block difficulty target, building the chain, inserting blocks, and handling forks. A follow-up article will cover the transaction layer of the public chain.
Original article by 七芒星实验室. When reposting, please credit the source: https://www.sudun.com/ask/34143.html