Since it's Spark, you probably want to build a fat jar using sbt-assembly. Otherwise the dependencies won't be there.
Do you speak Russian?
I can't speak Russian
There are two ways for you to get a packaged artifact running:
1. Use an IDE like IntelliJ: import the project and hit Run. It will include all dependencies (transitive and provided) and run your app.
2. Use an existing Spark cluster: put Spark in the provided scope (Spark and Spark Streaming are already on the classpath), add the sbt-assembly plugin, and run "sbt assembly". The jar will be in the project's target folder. With that jar and spark-submit you can run your app on the cluster. See the sketch below.
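A minimal sketch of what that setup could look like. Project name, Scala/Spark versions, and the plugin version are assumptions for illustration, not from the messages above:

```scala
// project/plugins.sbt -- sbt-assembly plugin (version is an assumption)
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "2.1.5")
```

```scala
// build.sbt -- minimal sketch, assuming Scala 2.13 and Spark 3.5 (versions are illustrative)
name := "my-spark-app"

scalaVersion := "2.13.12"

libraryDependencies ++= Seq(
  // "provided" keeps Spark out of the fat jar, since the cluster already has it on the classpath
  "org.apache.spark" %% "spark-core"      % "3.5.1" % "provided",
  "org.apache.spark" %% "spark-streaming" % "3.5.1" % "provided"
)

// avoid clashes between META-INF files pulled in by transitive dependencies
assembly / assemblyMergeStrategy := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case _                             => MergeStrategy.first
}
```

Then "sbt assembly" produces the fat jar under target/scala-2.13/, and you pass that jar to spark-submit together with your main class.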
I need a database with a high read speed, 100,000 requests per second or more. Does anyone know one? I don't insert, I only have selects.
100k RPS? Cache everything you can
I have a database of IPs: I select by IP and return a location. I have 100,000 requests per second and my table has 10 million records.
A 20-node Redis cluster with full replication is all you need
The IPs are stored as IP ranges, so I have to query between two numbers. How can I use Redis for that?
By using keys that each correspond to a few ranges. The idea is to compute a key from the IP that selects a handful of candidate ranges, and then pick the matching one in your code. A sketch follows.
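A minimal sketch of that idea, assuming the Jedis client, IPv4 addresses, and a made-up key scheme (ranges bucketed by their /16 prefix and stored as "start,end,location" strings in a Redis list; a range spanning several /16s would have to be written into each bucket it overlaps):

```scala
// Sketch only: key scheme, value format, and helper names are assumptions for illustration.
import redis.clients.jedis.Jedis
import scala.jdk.CollectionConverters._

object IpLookup {
  // Convert a dotted IPv4 address to an unsigned 32-bit number so ranges compare numerically.
  def ipToLong(ip: String): Long =
    ip.split('.').foldLeft(0L)((acc, octet) => (acc << 8) | octet.toLong)

  // Bucket key derived from the /16 prefix of the address.
  def bucketKey(ipNum: Long): String = s"ipranges:${ipNum >> 16}"

  // Fetch the few ranges stored in the bucket and decide in application code.
  def lookup(jedis: Jedis, ip: String): Option[String] = {
    val ipNum = ipToLong(ip)
    jedis.lrange(bucketKey(ipNum), 0, -1).asScala
      .map(_.split(','))
      .collectFirst {
        case Array(start, end, location)
          if start.toLong <= ipNum && ipNum <= end.toLong => location
      }
  }
}
```

Each lookup is then one LRANGE on a small bucket plus a linear scan over a handful of candidate ranges in your own code, which keeps the per-request work small enough for this kind of read rate.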