PARSIQ Community Q+A #45

Tom Tirman & Daniil Romazanov: Deep dive into the PARSIQ Network


Greetings PRQrew and Qties! 🖖🚀

We’ve got another community AMA for you. This time Tom Tirman (CEO) was joined by Danny Romazanov, who was previously the PARSIQ Product Owner, but has been recently promoted to CTO! /based_danny

As usual, before getting into the questions we wanted to remind you of some of the exciting things PARSIQ has going on. First, PARSIQ is participating in Into the IQVerse: Community Battle Royale! This campaign, hosted by IQ Protocol, features some huge prizes, with the grand prize amounting to over $10k! The competition will be fierce — and the only way to win is to band together as a community — so it’s time to assemble the PRQrew, the Qties, SWARMS, and all of the friends we’ve made along the way!

In addition to that, the team has also been making some great progress towards our expanded vision, making PARSIQ Network the go-to solution for Web3 backend. You can read about that vision here, but Danny and Tom also discuss it in the AMA below.

Finally, we are still offering benefits to those of you staking $PRQ. Specifically, there is a 2x Yield Boost Incentive and a Free $IQT Incentive available during Q2, 2022.

If you’d prefer to watch the AMA on YouTube, click here. Otherwise, the questions are all transcribed below, with the bolded questions being links to the time-stamped section of the video.

Now, let’s get on with the questions…

Introduction

Tom: Hey everyone, hope you’re all having a good Thursday. Welcome to PARSIQ’s AMA. I’m here today with Daniil (Danny), who is the product owner of the PARSIQ platform. It’s been a busy few weeks here at PARSIQ HQ, because we are heavily building everything that we discussed in the last AMA: our new product, the PARSIQ Network, and its first and most important parts, like the Tsunami API and, after that, the Data Lakes and much more, so that anyone in the space can get easy access to blockchain data, both real-time and historical, without the limitations of other solutions.

Danny’s Promotion

Let’s take this time to dive deeper into the technology, its significance, the key differences from what’s on the market, and more. Before I turn it over to Danny and before we dive into any community questions, I want to make a quick announcement. The CTO of PARSIQ, Alan, is shifting roles a bit; most of his time will be going to IQ Protocol. He will still stay involved with PARSIQ at a high level for architecture and such, but specifically for the PARSIQ Network, the flagship product that we’re building, Daniil is now the CTO, and he will oversee the development of this extremely important product.

Danny: Hi Tom, glad to be here. I’m super grateful for the faith and trust PARSIQ has in me. I’ll do my best to make the PARSIQ Network a great product. For me it’s been quite a busy year. I started just over a year ago, and all this time it was like maneuvering between everything, from strategic and vision questions, to working with the tech team on architecture, to even doing tasks on my own for some projects when the devs didn’t have time. In the beginning, when we didn’t have someone always on call to support clients, I did that as well. It’s been very eventful, and I’m glad that I earned this trust and can proceed in the CTO role. So yeah, I’m super excited.

Tom: Congrats, Danny, you deserve it. It’s been quite a journey; you’ve been with us for over a year now, and the work you’ve done has really helped the project a lot, so of course we have high hopes for you in delivering the most important milestones yet to come. So you didn’t get a pay raise, right?

Danny: Haha, well you know the answer to that.

Tom: Yeah you didn’t, but more responsibility, more stress.

Danny: Work was always our everything.

Tsunami API

Tom: Yeah true, true, we’re in it for the tech. So maybe at a high level you can explain the first milestone, the Tsunami API: why it is important, what it consists of, and who will be using it.

Danny: Right, so let’s start with the Tsunami API. Basically, the Tsunami API is our core product, and it’s not only the core product for the market but the core product for us as well, because everything that we do, and everything we will be doing in the short to mid term, is based on blockchain data. For us it is very important to build a tool that is convenient both for us to build on top of this data and for others as well.

The Tsunami API actually has a lot of competition out there, but even though it’s the core product, we differentiate in several ways, and we’ve made a lot of improvements. I’ve been talking to a lot of projects that are using the different solutions out there, and I have collected a lot of feedback on the cons of each of them. Every solution has different cons, and we were able to eliminate them; we don’t have those cons. For us it was very important to make it as efficient and as fast as possible, and we have achieved those goals. Our performance testing is showing really great results, and I think we will be sharing them quite soon, definitely before launch in July.

We were actually able to bring the two parts, historical and real-time data, together. There’s a big difference between the various real-time solutions. Some solutions just allow you to connect to a node and get a stream of events from there, like transactions as they happen. But with this approach there are issues and complications, because you have to take care of data consistency yourself: if for some reason your service or node goes offline for any period, then when you run your software again you have to eliminate the gap that opened while the service was offline. This can get very tricky, especially with fast blockchains, because with just a full node, or even a basic node, it gets really complicated; nodes do not store a lot of state in memory, meaning you will not be able to catch up with the history.

So we have thought all of this through, and we have built the mechanisms so that none of your data ever gets lost, even if your service is offline, even if our service for some reason was offline (which didn’t happen during my last year here, but we have taken care of it all). And for historical data, it’s a really efficient, super-fast API that allows you to extract enormous amounts of data from the blockchain. Let me give an example: say you want to collect some statistics on your protocol, or you’re a data scientist and you want to explore some things within a protocol. It gets complicated because you have to do a lot of queries to receive the data, fetching entries one by one or a hundred at a time, and if it’s a big protocol then it gets painful.
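
To make the gap-elimination idea concrete, here is a minimal sketch of how a consumer could combine a historical query with a live stream so that downtime never loses events. The client, method names, and event shape below are hypothetical illustrations, not the actual Tsunami API.

```typescript
// Hypothetical sketch: resuming an event stream without losing data.
// The client interface, method names, and event shape are assumptions
// for illustration only; they are not the real Tsunami API.

interface ChainEvent {
  blockNumber: number;
  txHash: string;
  payload: unknown;
}

interface DataClient {
  // Historical query: all events in a closed block range.
  getEvents(fromBlock: number, toBlock: number): Promise<ChainEvent[]>;
  // Real-time stream starting at the chain head.
  streamEvents(onEvent: (e: ChainEvent) => void): void;
  latestBlock(): Promise<number>;
}

async function resumeWithoutGaps(
  client: DataClient,
  lastProcessedBlock: number,
  handle: (e: ChainEvent) => void,
): Promise<void> {
  // 1. Backfill: query the historical API for everything missed
  //    while the service was offline.
  const head = await client.latestBlock();
  if (lastProcessedBlock < head) {
    const missed = await client.getEvents(lastProcessedBlock + 1, head);
    missed.forEach(handle);
  }

  // 2. Switch to the real-time stream, skipping anything the backfill
  //    already delivered, so each event is processed exactly once.
  client.streamEvents((e) => {
    if (e.blockNumber > head) handle(e);
  });
}
```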

There are use cases where people need, let’s say, the whole history of a pair on Uniswap, and getting that data is actually really tricky; I’ve been talking to some projects who are using 3–5 solutions just to scrape pieces of data from everywhere. We have designed our historical API so that if you need a lot of data, you can get it in one request, in one file. It depends on your needs, of course: you can get it with pagination, or you can get it as one single CSV file if you want to store it on your side. We actually took a lot of time to analyse the market and the solutions out there, take the best, and build the best solution for these use cases.
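
As an illustration of the two retrieval modes Danny describes, paginated queries versus one bulk file, a consumer-side sketch might look like this. The endpoint, parameters, and response fields are assumptions made for the example, not the published API.

```typescript
// Hypothetical illustration of the two retrieval modes described above:
// paginated JSON for incremental consumption, or one bulk CSV file for
// storing the whole history on your side. Endpoint and parameters are
// placeholder assumptions, not the published API.

const BASE = "https://api.example.com/history"; // placeholder URL

// Mode 1: paginated queries, page by page.
async function fetchPaginated(pair: string): Promise<unknown[]> {
  const all: unknown[] = [];
  let cursor: string | undefined;
  do {
    const url =
      `${BASE}?pair=${pair}&limit=1000` +
      (cursor ? `&cursor=${cursor}` : "");
    const res = await fetch(url);
    const body = await res.json();
    all.push(...body.items);
    cursor = body.nextCursor; // undefined on the last page
  } while (cursor);
  return all;
}

// Mode 2: one request, one CSV file with the entire history.
async function fetchAsCsv(pair: string): Promise<string> {
  const res = await fetch(`${BASE}?pair=${pair}&format=csv`);
  return res.text(); // caller writes this to disk
}
```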

Tom: Yeah, so pretty much everything that we’ve learned by working with different partners and hearing their needs is rolled into one: all the headaches with blockchain data that we heard about from everyone who works with heavy amounts of data have been taken into account. We have a really big pipeline, in my opinion, of projects already onboard to test the alpha and beta versions of the product. Danny, I know you have been on a number of calls for early validation of what they would like to use. What was the general sentiment, the general feedback, in short?

Danny: Yeah, I’ve been on a lot of calls talking to projects, building a potential pipeline, and people are really excited. I’ve been showing some diagrams, talking about how the tech actually works, what kind of architecture we use and why we use it. There are roughly two types of clients. The first are those who want to use our Tsunami API, and for them, from the calls that I had, we actually cover 100% of the use cases these projects are talking about.

So like I said, we did our homework and studied the market a lot, so between what we have and the input we collected previously, we had already covered most of the use cases of the projects I talked to. For other projects it’s not the Tsunami API, it’s more about the data lakes: they have their own protocols, or are using other protocols, and they want to collect data on those protocols. A data lake is protocol- and project-specific: a fault-tolerant, reorg-aware mirror of their environment, with all the business logic of the protocol inside, and it has its own API layer, so the protocol and its users, whatever the protocol and project decide, can have access to this data.

So we are trying to provide the most comfortable way for protocols to access their own data, so they don’t have to take care of infrastructure, because infrastructure costs are huge and only getting bigger. We’re also working with the protocols, helping them build. This is R&D-type work: we research the protocol, work with the team, and help to aggregate and decode all the data that they have. It depends on the protocol, as there are a lot of different use cases. We want to make life for the protocols as easy as possible, so they don’t have to take care of their data; they can just come to us and work with us, and we provide all the data and APIs that they need.
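
To picture what decoding protocol data with its business logic might produce, here is a hypothetical sketch of a data-lake record and its API layer. The shapes and names are assumptions for illustration, not PARSIQ’s actual schema.

```typescript
// Hypothetical sketch of a protocol-specific data lake record: a raw
// on-chain log decoded into a domain-level event with the protocol's
// business logic applied. All shapes and names are assumptions.

interface RawLog {
  address: string;   // contract that emitted the log
  topics: string[];  // indexed event parameters
  data: string;      // ABI-encoded payload
  blockNumber: number;
}

// After decoding and business logic, the data lake stores records a
// protocol's users can actually query, e.g. a swap on a DEX pair.
interface DecodedSwap {
  pair: string;
  trader: string;
  amountIn: bigint;
  amountOut: bigint;
  blockNumber: number;
}

// The data lake's own API layer then serves domain queries directly,
// instead of making every consumer re-decode raw logs themselves.
type DataLakeApi = {
  swapsByTrader(trader: string, limit: number): Promise<DecodedSwap[]>;
};
```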

Tom: What’s your view on the centralized vs. distributed vs. decentralized debate? PARSIQ falls somewhere in the middle of that. Have there been issues when someone says, “You’re not fully decentralized, so we cannot use you”?

Danny: Yeah, I’ve been talking to a lot of projects and I’ve been honest with them; I’ve been showing the diagrams of how everything works, and none of the projects actually asked me, “Why are you not fully decentralized?”, because we’re not fully decentralized. People understand that working with blockchain data means a lot of data, with some blockchains producing something like 100 GB per day, and right now at least there is no efficient way to make this fully decentralized. We do not want to sacrifice efficiency just to swim in all the buzzwords that the crypto market has.

All these buzzwords are a very hot topic today, but with time, as crypto gets adoption, things will settle a bit and it will not be the same buzzword game; it will be about the efficiency of the solutions, not just blindly following trends. This is how it is right now, and there is no other way for an industry this young. But with time the picture will change, and efficiency will be valued much more than just throwing around beautiful words.

Tom: Yeah, exactly. So far what I’ve heard is: “You’re giving us data from these decentralized networks; we don’t need you to be decentralized for this.” And the fact that our network is distributed helps a lot, because there is no need for fully decentralized data. At least that’s how most projects feel, and we have surveyed a lot of them.

Danny: Yeah, I can add something here. With our system being distributed, we built it so that every component, every layer, is fault tolerant: it doesn’t matter if some component dies and all its data is wiped (for the record, this never happened), every component can reinitialize itself in a very short time and resume work as expected. We built it this way so that no data is ever lost. And then there is the question of “Can you trust the data?”

Of course, we collect everything from our pool of nodes, and for every blockchain there are a couple of nodes. But in the future, when the time comes, we want to provide users with some kind of utility or interface where you will be able to actually check the consistency between the data you have on the blockchain and the data you have on our side. We want to be as transparent as possible, and we do not hide anything. We do have our data, we do collect it from the nodes, and the system is built so that it is not possible to tamper with this data; once it gets into our databases, it cannot be altered in any way. But we still want to provide some way for projects or users to be sure about our data’s consistency and integrity.
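
A simple version of the consistency check Danny hints at could compare block hashes from your own node against the provider’s copy. Everything here, the names, the interfaces, the flow, is a hypothetical sketch of the idea, not an announced feature.

```typescript
// Hypothetical sketch of the consistency check described above: compare
// the block hash your own node reports against the hash the data
// provider stored. Names and interfaces are illustrative assumptions.

interface BlockRecord {
  number: number;
  hash: string;
}

async function verifyRange(
  fetchFromNode: (n: number) => Promise<BlockRecord>,
  fetchFromProvider: (n: number) => Promise<BlockRecord>,
  from: number,
  to: number,
): Promise<number[]> {
  const mismatches: number[] = [];
  for (let n = from; n <= to; n++) {
    const [ours, theirs] = await Promise.all([
      fetchFromNode(n),
      fetchFromProvider(n),
    ]);
    // Matching hashes mean the provider's copy of the block is the
    // same one the chain canonized.
    if (ours.hash !== theirs.hash) mismatches.push(n);
  }
  return mismatches;
}
```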

Tom: Before we jump into the community questions, one last thing. What would you say are the key differentiators between what we’re building (the PARSIQ Network, the Tsunami API, and everything related) and the other blockchain data providers in the space?

Danny: Well, the biggest one: for APIs there is a lot of competition out there, but if we talk about data lakes, that is the biggest differentiator. No one in the space offers you custom, tailored DeFi data, no one offers to work closely with you and build your data for you, and that is something we are ready and willing to do. Another thing, which I mentioned previously even for Tsunami, is the combination of historical and real-time working together, so that you can easily query the data from the past and then immediately start receiving all the data as it happens.

And you don’t have to think about any of it on your own; all these problems are solved on our end. As I touched upon a little previously, we built the system so that nothing gets lost, and if your services are ever offline, we guarantee delivery of the events. Even if, for some reason, you’ve been down for hours, whatever happens, you will be able to go to the interface, see what the errors are, and replay them. For many projects it’s critical that they get the right events at the right time.

Or at least that they get the events at all: even if you were offline for some time, you still want to get all the events, because otherwise your data, your interfaces, and your user experience will become inconsistent. And I believe we have taken care of it all.
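
The delivery guarantee and the “replay these errors” interface could work roughly like at-least-once delivery with a dead-letter queue. The sketch below is an assumed illustration of that pattern, not PARSIQ’s actual implementation.

```typescript
// Hypothetical sketch of the delivery guarantee described above:
// at-least-once delivery with a dead-letter queue (the "errors" you
// can inspect and replay). All names here are illustrative assumptions.

type DeliveredEvent = { id: string; payload: unknown };

class GuaranteedDelivery {
  private deadLetter: DeliveredEvent[] = [];

  constructor(private send: (e: DeliveredEvent) => Promise<void>) {}

  async deliver(e: DeliveredEvent): Promise<void> {
    try {
      await this.send(e); // e.g. POST to the client's webhook
    } catch {
      // Client offline: keep the event instead of dropping it.
      this.deadLetter.push(e);
    }
  }

  // What a "replay these errors" interface would do: retry every
  // failed event, keeping the ones that fail again for later.
  async replayErrors(): Promise<void> {
    const pending = this.deadLetter;
    this.deadLetter = [];
    for (const e of pending) await this.deliver(e);
  }
}
```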

Tom: Yeah, we’ve basically taken a look at the problems currently existing in the blockchain data space and tried to solve them all on top of the technology we’ve already built, and to package it into one accessible, easy-to-use product and network. On top of that come all kinds of other perks, like data storage, and further down the roadmap we have the hybrid model for building your backends and many other things. The goal is that people building Web3 applications and protocols wouldn’t have to use 5, 6, 7 different services; using PARSIQ, they can get all their data needs, all their backend needs, from one place. They just need the smart contracts, and maybe the UI; that’s it.

Tom: So let’s jump in, let’s see the first question…

Community Questions

Is Flux on your radar? Any immediate plans to work with them?

Tom: So I haven’t spoken to Flux, and I don’t think our team has spoken to Flux yet; it might be worth looking into. I know Flux, but no plans right now.

Will $PRQ be on the Cronos chain and will IQ be accessible on the Cronos network?

Tom: Good question, likely, very likely to both of them.

The recent PARSIQ 101 article stated that Tsunami is launching in July. PARSIQ doesn’t like putting dates on things, but has with Tsunami. Why is this?

Tom: Well, on the roadmap it’s the beginning of Q3 of this year, and the beginning of Q3 is July, but it’s somewhat malleable. That’s the goal; it’s not an exact date. As you can see there are no exact dates on the roadmap, but July is the goal, and I think we should reach it. Danny, you wanted to add something about the deadline?

Danny: Yeah, everything was thoroughly planned and decoupled into tasks, so the estimate is very realistic. Right now we are on schedule and things are progressing exactly as we expected them to progress, so I’m positive so far about the beginning of Q3, July. We’re actually starting alpha testing soon with some big players out there who already want to use our API, so I’m quite excited about that. I hope I will be able to share more details about the alpha testing; it definitely will not be publicly available at that time, because that’s relatively soon, but maybe we can share some thoughts from the projects.

Tom: Yeah, it seems to be on track.

Have we spoken to Coinbase? If yes, what’s the score?

Tom: We’ve pursued that listing for over a year. What’s the score? I think it’s pretty good, as you’ve seen from the article that was published. Score is positive.

With the Tsunami API, will ParsiQL become mainly an internal language for developers?

Tom: I guess you could say that, Danny feel free to give some details.

Danny: So yeah, just to be precise here: it’s not the main language we develop in, but ParsiQL is technology we built, and we’ve come to the conclusion that it’s technology we actually built for ourselves, to conveniently and easily work with the data on the blockchain. And we’re still all about the data. So it is one of our biggest utilities inside the team.

When do you think PARSIQ will be more in demand, will it be when Web3 takes off or something else?

Tom: Well, I think we’re pretty well positioned already. There are a lot of teams building in the space, building dApps, protocols, all kinds of applications, and they all need to build their backends. Seeing the response on the pipeline, I think the delivery of the first set of PARSIQ Network milestones will show a lot of adoption. When Web3 reaches the adoption curve, the real potential will come out, but I think we’re very well positioned already with the PARSIQ Network.

It was said that borrowing would be added more consistently, but this seems to have slowed down. Why?

Tom: APY is frontloaded, right? It gives out more rewards at the beginning of the rental than at the end, and this is the reason why it varies so much. There are different periods; there are periods where subscriptions have to be renewed. But I can tell you that the steady flow will continue. There were a couple of weeks of slower flow, but I think you’ll be pretty happy with the coming weeks and months.
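
As a rough illustration of what “frontloaded” means here: if the reward rate decays over the rental term, most of the yield accrues early. The linear decay model below is an assumption made purely for illustration, not IQ Protocol’s actual reward formula.

```typescript
// Illustration only: a linearly decaying reward rate, which pays out
// more at the start of a rental than at the end. This is an assumed
// model for the sake of the example, not IQ Protocol's actual formula.

function frontloadedReward(
  startRate: number, // reward per day on day 0
  termDays: number,  // length of the rental
  day: number,       // day being accrued (0-based)
): number {
  // Rate falls linearly from startRate toward 0 across the term.
  return startRate * (1 - day / termDays);
}

// Sum the rewards accrued over a half-open range of days [from, to).
function totalReward(from: number, to: number): number {
  let sum = 0;
  for (let d = from; d < to; d++) sum += frontloadedReward(10, 30, d);
  return sum;
}

// With startRate = 10 and a 30-day term, the first ten days pay ~85
// tokens while the last ten pay ~18: the yield is frontloaded.
console.log(totalReward(0, 10).toFixed(1), totalReward(20, 30).toFixed(1));
```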

In the recent Medium article it states that a “consumer facing mobile app connected to the flagship product” is coming. Will this be the same app that was discussed for AWS partners, or a separate app in addition?

Tom: I wonder if this is meant to be about PARSIQ Atlas, because PARSIQ Atlas will be a blockchain explorer, but not like Etherscan; it will be more automated and DeFi specific. Think of it like a DeFi and dApp explorer for everyone to use. You can go there, choose a dApp, an application, a protocol of your choosing, and get data on what is happening inside it. I don’t think anyone does that.

Danny: Yeah, actually our idea is that, while Etherscan is a really convenient and big tool, on the other hand, if you want to get meaningful data out of it, things get complicated. Our idea is that, being data providers and integrating different protocols, we can actually give a much better overview of what a certain address is doing within those protocols. Nothing is set in stone here with Atlas, but I can imagine an interface where you enter an address and see how it interacts with different protocols, in a more graphic way than just a list of transactions. So yeah, kind of excited about that one.

You’ve referred to PARSIQ as the Zapier of blockchain in the past. What would you compare the Tsunami API to in terms of a similar non-blockchain project?

Tom: So, Danny, if you take the PARSIQ Network as a whole, with the data APIs, the data storage, and everything, what would you compare it to from the traditional tech world? Which product?

Danny: I would say probably Firebase, or any of the infrastructure/ecosystem products that try to solve as many of your backend use cases as possible. Firebase is probably the best analogy here: a backend for everything. We want to be the backend for Web3 applications, having all the data on our side and also giving projects the opportunity to store their data in our off-chain distributed networks, so they don’t have to take on enormous transaction costs, which are definitely a poor user experience. We’ll store it in a safe way on our side, and as I mentioned, our components are very advanced in this sense; they can react to reorgs and everything else that happens on the blockchain.

Can I rent out my own NFT to myself to make use of all the utilities?

Tom: Yeah, assuming it’s a multi-utility NFT. That’s a cool question; I’ve actually thought about it myself. The original may not have some of these utilities in the different IQverses while the rented version does, so why don’t I rent it out to myself, pay myself the rental fee, and enjoy those utilities myself? That’s pretty cool.

Do you have any idea which blockchain projects / protocols will be the first to have their data accessible by the new API?

Tom: Well, I know some of them, of course. Others are in talks; obviously we won’t be announcing them until they’ve tested the product.

Danny: Well, we can definitely mention blockchains. We’re launching on Ethereum, Polygon, BSC, and Avalanche, with all of their data, historical and everything. The other projects are yet to be announced.

Tom: Yeah the aim will be to integrate most blockchain networks that have adoption.

Which would you most like to see?

Tom: There are different types here. There will be projects who want to access the data of their own dApp or protocol. Then there are those who want to access the data of third-party applications and protocols. And then there will be third parties who want to access the data of one specific protocol.

So eventually, I’d say, wherever there is demand we’ll integrate them all. And when we have the SDK, it will allow us to outsource these kinds of data lake and dApp integrations in a community way, through a DAO, where people can vote on which ones to integrate, or projects themselves can come to the DAO and propose: please integrate us. The community will vote on where we should spend these resources, and we would grant funds to third-party developers to actually do the integrations.

What happened to the big marketing firm, do you still work with them?

Tom: So, to be entirely correct, it was a PR firm. And no, we aren’t working with them; we haven’t been since last year. We’ve actually expanded our marketing department, we’re doing PR and marketing in house, and it’s working out quite well, so there is no need at this point. We might hire one short-term when we have some extremely big news to share, and there will be those times, I know for a fact; for those purposes we can hire a firm to run campaigns for two months, for example. But I feel we’re doing a great job in house.

Tom: Any closing thoughts on the product side, Danny?

Danny: Yeah it’s exciting times. It’s so cool to see the whole team being excited about what we’re doing. I don’t know how to describe it, everyone is chatting and positive you know. I’m talking about like the tech team, they’re super excited to be building what we’re building. That’s amazing to see, all the energy.

Tom: Yeah, absolutely, the team is in exceptional form. The product team really loves what we’re doing with the PARSIQ Network and the Tsunami API; they’re all in on this. I think it was the absolute best decision to put our main focus on this, using our existing technology, because everyone is hyped. Last year the PARSIQ platform reached a certain level of usage, and after that, while there was steady growth, it was slow. The growth rate wasn’t the same from October-November up to February, when we decided to make the shift. And I have explained the reasons: it was because we couldn’t cover the use cases that clients wanted without custom development work. So far the validation has been great.

Pretty much everyone we’ve spoken to wanted to use the PARSIQ Network, so we’re off to a great start. We already have the tech for it, so now it’s about building the API, building the front-end, and getting it live; onboarding as many projects as we can; and starting work on data lakes. We’ll work with pretty much every protocol and dApp in the space, at least those being used, anyway. That means a huge amount of collaborations with other projects, which will allow us to tap into their communities as well, including the developer communities. It’s been a lot. And we have a last question from Dominik. Hi, Dominik.

There were a lot of partnership announcements last year, but few this year. With all the conferences we attended, will this increase soon?

Tom: Well, this is exactly the reason: the product shift. Believe me, when this goes live we’ll have a huge load of dApps and protocols onboarded, and we’ll be doing announcements with most of them, big and small. Everyone deserves their own data lake.

Tom: Alright, I think that’s it for today. Regarding the listings, which have been a major topic in all AMAs, we’ve had some good progress there, as you’ve seen: a listing on Crypto.com, a listing on Huobi, the Coinbase article. This just shows that it takes time, but if you have conviction and actually see things through, they’ll happen. And I have no doubt that whichever ones we’ve spoken to, like Binance and Kraken and so on, will come as well; it’s just a matter of time. It’s about constantly improving and delivering value for users in the space, staying in the picture, and eventually you get noticed. It works; I’ve seen it a million times myself with PARSIQ.

Is there a dedicated tech team to hand-hold and seed new companies using PARSIQ?

Tom: No. I mean, obviously we have team members from the product team who will offer a lot of support to dApps and protocols that want their own data lake, so we’ll be working closely with them; I don’t know if you would consider that handholding or not. As for new companies who want to build their startups and Web3 applications on PARSIQ technology, we might just make a PARSIQ ecosystem fund and grant system for that, but a little bit later.

What’s next after the Tsunami API?

Danny: For us, since we already have the data, what we want to do is work with the protocols. Of course there will be new things coming as well, but first, after launching Tsunami, there’ll be a constant expansion of the protocols we’re working with, providing the best data APIs for them and their users. So we’re definitely looking at the expansion of data lakes, and we’re looking at other products, like Atlas, that are built on top of our tech.

Tom: Yeah and there are things that we haven’t really mentioned yet that are in the works but we’ll mention them all in due time.

Conclusion

Tom: Alright, this concludes the AMA for today. Thank you all for tuning in. As a lot of you missed it and didn’t have a chance to post your questions, post them in the Telegram chat and tag one of our team members, so we can take some of them and maybe answer them in written form. Have a great weekend, see you in a couple of weeks!

About PARSIQ

PARSIQ is a full-suite data network for building the backend of all Web3 dApps & protocols. The Tsunami API, which will ship in July 2022, will provide blockchain protocols and their clients (e.g. protocol-oriented dApps) with real-time data and historical data querying abilities.

Website | Blog | Twitter | Telegram | Discord | Reddit | YouTube
