An Introduction to MinaExplorer's BigQuery Public Dataset

Mina is a succinct blockchain: while we can verify the chain's current state using a zero-knowledge proof, the prior history is not retained. So, if we want to analyze the chain's history (for example, to see individual transactions), we need to obtain it from an archive node. The official implementation of an archive node stores its data in a Postgres database, which typically requires recursive queries to determine the canonical chain. See this post on how to set up and configure an archive node for redundancy.

Those who want to query historical data without running their own archive node (and the associated Mina node(s)) can use services such as Figment's DataHub or MinaExplorer's archive GraphQL API. However, neither offers simple aggregation features to answer questions such as "how many blocks were produced in the last 24 hours?". While obtaining this information via scripting is possible, directly querying a database with SQL is more accessible and efficient.

To resolve this, MinaExplorer has published its custom archive node dataset to Google BigQuery as a public dataset. Google BigQuery is a cloud-based big data analytics web service for processing very large data sets. Data is replicated from MinaExplorer's database (which also stores the data behind its GraphQL subscriptions) with a small latency of no more than a few minutes. The schema of the BigQuery dataset matches that of the MinaExplorer GraphQL API. Notably, snarks and user transactions (aka user commands), which are also nested in the blocks data, are separated into their own tables for easier querying.

While you can use any GUI that supports BigQuery, such as PopSQL, Tableau, or DataGrip, we will use the BigQuery console to execute the queries in this article. Access the console, create a new project if required, and add the data source by selecting Add Data -> Pin a project -> Enter project name and entering minaexplorer.

Number of blocks in the database, grouped by canonical status

This query highlights that the database stores all blocks seen. If you are only interested in the canonical chain, filter all of your queries with WHERE canonical = true so that only canonical blocks/snarks/transactions are returned.

While MinaExplorer pays to host the dataset, any queries you run against it are charged against your own billing account. There is a generous free tier of 1 TB of data processing to get started. The remainder of the article provides some sample queries to demonstrate the use of the dataset.

Querying Block Data

Each block has a corresponding datetime field (UTC), which we can use to, for example, group canonical blocks by day:

SELECT datetime_trunc(datetime, DAY) as day,
       COUNT(statehash) as total_blocks
FROM
WHERE canonical = true
GROUP BY day
ORDER BY total_blocks DESC

This query returns the following result, which we could additionally visualize in Data Studio by clicking the Explore Data link.

Maximum, minimum, and average SNARK fees in the canonical chain

Yes, this block really did have a SNARK included for 700 MINA. You can use this data to help choose appropriate SNARK fees, for example by filtering for just the last 24 hours by adding AND datetime >= TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL -24 HOUR) to the WHERE clause, or via the previous 100 blocks (this query includes a subquery to first determine the latest height):

SELECT COUNT(fee) as total_snarks,
       MIN(fee) / 1000000000 as minimum_fee,
       MAX(fee) / 1000000000 as maximum_fee,
       AVG(fee) / 1000000000 as average_fee
FROM as s,
     (SELECT blockheight
      FROM
      WHERE canonical = true
      ORDER BY blockheight DESC
      LIMIT 1) as b
WHERE canonical = true
  AND s.blockheight >= (b.blockheight - 100)

Querying Account Data

The following query, which determines the number of accounts in the ledger, demonstrates how accounts are added to the ledger. An account is added to the ledger if it:

- Was included in the Genesis ledger (ledger hash jx7buQVWFLsXTtzRgSxbYcT8EYLS8KCZbLrfDcJxMtyy4thw2Ee).
- Received a transaction with an amount high enough to cover the ledger creation fee.
- Was included via a fee transfer (snark work or coinbase receiver) with a high enough amount to cover the ledger creation fee.

Calculate the unvested slot for any timed public key

Foundation Delegation Analysis

The Foundation delegation program requires calculating the amount to return to the Foundation each epoch from the balance of the pool and the number of blocks produced. While scheduled emails now provide this information, we can independently verify the values in the email, using a public key and epoch data, with the following query.
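The "blocks produced in the last 24 hours" question raised above can be answered directly once the minaexplorer project is pinned. The sketch below is illustrative only: `blocks` is a hypothetical placeholder, not the dataset's actual table identifier, which comes from the pinned project.

```sql
-- Sketch: count canonical blocks produced in the last 24 hours.
-- `blocks` is a placeholder table name, not the dataset's real identifier.
SELECT COUNT(statehash) AS blocks_last_24h
FROM blocks
WHERE canonical = true
  AND datetime >= TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL -24 HOUR)
```

In BigQuery standard SQL, TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 24 HOUR) is an equivalent spelling of the same cutoff.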
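The example captioned "Number of blocks in the database, grouped by canonical status" could be written along the following lines. This is a hedged reconstruction, not the article's original SQL, and `blocks` again stands in as a hypothetical table name.

```sql
-- Sketch: group all stored blocks (canonical and orphaned) by status.
-- `blocks` is a placeholder table name.
SELECT canonical, COUNT(statehash) AS total_blocks
FROM blocks
GROUP BY canonical
```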