Distributed SQL query engine for big data


Statistics on presto

  • Number of watchers on GitHub: 7257
  • Number of open issues: 1250
  • Average time to close an issue: 1 day
  • Main language: Java
  • Average time to merge a PR: 2 days
  • Open pull requests: 406+
  • Closed pull requests: 230+
  • Last commit: over 1 year ago
  • Repo created: almost 7 years ago
  • Repo last updated: over 1 year ago
  • Size: 86.8 MB
  • Organization / Author: prestodb

Presto

Presto is a distributed SQL query engine for big data.

See the User Manual for deployment instructions and end user documentation.

Requirements

  • Mac OS X or Linux
  • Java 8 Update 92 or higher (8u92+), 64-bit
  • Maven 3.3.9+ (for building)
  • Python 2.4+ (for running with the launcher script)

Building Presto

Presto is a standard Maven project. Simply run the following command from the project root directory:

./mvnw clean install

On the first build, Maven will download all the dependencies from the internet and cache them in the local repository (~/.m2/repository), which can take a considerable amount of time. Subsequent builds will be faster.

Presto has a comprehensive set of unit tests that can take several minutes to run. You can disable the tests when building:

./mvnw clean install -DskipTests

Running Presto in your IDE


After building Presto for the first time, you can load the project into your IDE and run the server. We recommend using IntelliJ IDEA. Because Presto is a standard Maven project, you can import it into your IDE using the root pom.xml file. In IntelliJ, choose Open Project from the Quick Start box or choose Open from the File menu and select the root pom.xml file.

After opening the project in IntelliJ, double check that the Java SDK is properly configured for the project:

  • Open the File menu and select Project Structure
  • In the SDKs section, ensure that a 1.8 JDK is selected (create one if none exist)
  • In the Project section, ensure the Project language level is set to 8.0 as Presto makes use of several Java 8 language features

Presto comes with sample configuration that should work out-of-the-box for development. Use the following options to create a run configuration:

  • Main Class: com.facebook.presto.server.PrestoServer
  • VM Options: -ea -XX:+UseG1GC -XX:G1HeapRegionSize=32M -XX:+UseGCOverheadLimit -XX:+ExplicitGCInvokesConcurrent -Xmx2G -Dconfig=etc/ -Dlog.levels-file=etc/
  • Working directory: $MODULE_DIR$
  • Use classpath of module: presto-main

The working directory should be the presto-main subdirectory. In IntelliJ, using $MODULE_DIR$ accomplishes this automatically.
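The -Dconfig=etc/ option above points at the sample configuration shipped in presto-main/etc. As a rough sketch, a minimal single-node config.properties uses the standard Presto deployment properties (the values shown here are illustrative defaults, not taken from this document):

```
coordinator=true
node-scheduler.include-coordinator=true
http-server.http.port=8080
query.max-memory=5GB
discovery-server.enabled=true
discovery.uri=http://localhost:8080
```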

Additionally, the Hive plugin must be configured with the location of your Hive metastore Thrift service. Add the following to the list of VM options, replacing localhost:9083 with the correct host and port (or use the value below if you do not have a Hive metastore):


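Assuming the Hive connector's standard metastore property name (hive.metastore.uri), the added VM option looks like this; the thrift://localhost:9083 value matches the placeholder host and port mentioned above:

```
-Dhive.metastore.uri=thrift://localhost:9083
```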
Using SOCKS for Hive or HDFS

If your Hive metastore or HDFS cluster is not directly accessible from your local machine, you can use SSH port forwarding to access it. Set up a dynamic SOCKS proxy with SSH listening on local port 1080:

ssh -v -N -D 1080 server

Then add the following to the list of VM options:


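Assuming the Hive connector's standard SOCKS proxy property (hive.metastore.thrift.client.socks-proxy), the added VM option points at the local port opened by the SSH command above:

```
-Dhive.metastore.thrift.client.socks-proxy=localhost:1080
```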
Running the CLI

Start the CLI to connect to the server and run SQL queries:


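The build produces a self-executing CLI jar under presto-cli/target. A typical invocation against a local development server looks like the following (the --server, --catalog, and --schema values here are illustrative defaults, not taken from this document):

```
presto-cli/target/presto-cli-*-executable.jar --server localhost:8080 --catalog hive --schema default
```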
Run a query to see the nodes in the cluster:

SELECT * FROM system.runtime.nodes;

In the sample configuration, the Hive connector is mounted in the hive catalog, so you can run the following queries to show the tables in the Hive database default:

SHOW TABLES FROM hive.default;

Code Style

We recommend you use IntelliJ as your IDE. The code style template for the project can be found in the codestyle repository along with our general programming and Java guidelines. In addition to those you should also adhere to the following:

  • Alphabetize sections in the documentation source files (both in table of contents files and other regular documentation files). In general, alphabetize methods/variables/sections if such ordering already exists in the surrounding code.
  • When appropriate, use the Java 8 stream API. However, note that the stream implementation does not perform well so avoid using it in inner loops or otherwise performance sensitive sections.
  • Categorize errors when throwing exceptions. For example, PrestoException takes an error code as an argument, PrestoException(HIVE_TOO_MANY_OPEN_PARTITIONS). This categorization lets you generate reports so you can monitor the frequency of various failures.
  • Ensure that all files have the appropriate license header; you can generate the license by running mvn license:format.
  • Consider using String formatting (printf style formatting using the Java Formatter class): format("Session property %s is invalid: %s", name, value) (note that format() should always be statically imported). Sometimes, if you only need to append something, consider using the + operator.
  • Avoid using the ternary operator except for trivial expressions.
  • Use an assertion from Airlift's Assertions class if there is one that covers your case rather than writing the assertion by hand. Over time we may move over to more fluent assertions like AssertJ.
  • When writing a Git commit message, follow these guidelines.
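The error-categorization and string-formatting guidelines above can be sketched with a minimal, self-contained example. ErrorCode and CategorizedException below are hypothetical stand-ins for Presto's real error-code enums and PrestoException (which takes a code such as HIVE_TOO_MANY_OPEN_PARTITIONS); format() is statically imported as the guidelines recommend.

```java
import static java.lang.String.format;

public class ErrorCategorization
{
    // Stand-in for an error-code enum such as HiveErrorCode
    enum ErrorCode { HIVE_TOO_MANY_OPEN_PARTITIONS, INVALID_SESSION_PROPERTY }

    // Stand-in for PrestoException: every throw site supplies a category,
    // so failures can be aggregated and monitored by error code
    static class CategorizedException extends RuntimeException
    {
        final ErrorCode errorCode;

        CategorizedException(ErrorCode errorCode, String message)
        {
            super(message);
            this.errorCode = errorCode;
        }
    }

    static void checkSessionProperty(String name, String value)
    {
        if (value.isEmpty()) {
            // printf-style formatting via the statically imported format()
            throw new CategorizedException(
                    ErrorCode.INVALID_SESSION_PROPERTY,
                    format("Session property %s is invalid: %s", name, value));
        }
    }

    public static void main(String[] args)
    {
        try {
            checkSessionProperty("query_max_memory", "");
        }
        catch (CategorizedException e) {
            // The error code makes the failure easy to count in reports
            System.out.println(e.errorCode + ": " + e.getMessage());
        }
    }
}
```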
Presto open issues
  • over 2 years Predicate pushdown for timestamp columns
  • over 2 years How to add connectors to presto on Amazon EMR ?
  • over 2 years Query 20161110_121745_00074_ru8sf failed: line 1:15: Catalog mysqlcatalog does not exist
  • over 2 years Explain analyze incorrectly strips off output columns
  • over 2 years Add column nullable stats per shard in Raptor
  • over 2 years Support Unicode escaped strings
  • over 2 years Categorize IOException when reading Hive table
  • over 2 years Display better error messages for unsupported correlated query shapes
  • over 2 years Query plans not deterministic
  • over 2 years Maven checks Travis job fails nondeterministiacally
  • over 2 years Add hard memory limit to resource groups
  • over 2 years Truncate long principals on UI query details page
  • over 2 years Add header in UI to live plan with link back to query details page
  • over 2 years Add support for DISTINCT in selective aggregates
  • over 2 years How to Delete a partition file in Amazon S3 using a Presto script?
  • over 2 years testResourceGroupInfo - random test fail
  • over 2 years Missing schema permission implementations
  • over 2 years Uncategorized error for from_unixtime
  • over 2 years Add failureHost and failureTask to QueryInfo
  • over 2 years Do not send INFORMATION_SCHEMA security checks to connectors
  • over 2 years Inconsistency in the way the TPCH connector handles schema names
  • over 2 years Add configurable limit on Hive partitions read
  • over 2 years Properly handle nulls in Raptor temporal and sort columns
  • over 2 years Support =ALL and <>ANY for unorderable types
  • over 2 years Support INSERT for Cassandra connector
  • over 2 years Handle type mismatch between partition and file in RCFile
  • over 2 years Add support for database schema evolution in Raptor
  • over 2 years Categorize "Unsupported correlated subquery type" exception
  • over 2 years Expression in ON clause not removed when implicit cast inserted in comparison
  • over 2 years Hash partition on GroupId in aggregation exchanges
Presto open pull requests
  • Enforce minimum file descriptor limit on startup
  • Add loading indicator to query detail page
  • Document how to enable Kerberos on the Presto CLI/coordinator.
  • Remove references to TaskInfo from StatementResource
  • Add width bucket implementation for array bin specification
  • Apply non-TupleDomain predicates to Hive partition list
  • Improve QueryBuilder to support VARCHAR condition
  • Support multiple shard UUIDs in ShardPredicate
  • Use actual read bytes for ParquetDataSource
  • Parse each type string to type signature only once in QueryResults
  • Array less than or equal migration
  • Fix product tests resource processing
  • Parquet reads data in Slice and use pure java compression
  • Add support for scalar subqueries in delete queries
  • Fix missing page memory tracking at parallel build
  • Decimal v7
  • Migrate array_equal operator to new scalar framework
  • Migrate map_equal operator to new scalar framework
  • Add limited support for non-equi outer joins
  • Validate chunkLength for OrcInputStream
  • Don't return taskInfo when task is deleted
  • Migrate array_hash_code operator to new scalar framework
  • Migrate array_greater_than_or_equal operator to new scalar framework
  • Support VARCHAR(x) columns in Hive Connector
  • Fix `` property usage.
  • Fix dictionary fallback detection.
  • Migrate array_sort() to new scalar framework
  • Add support for inserting nulls
  • Provide more context for error messages
  • Migrate array_intersect() to new scalar framework
  • Remove the remaining VARCHAR in integration test
  • Add maven plugin profiling extension
  • Use shorter factory names for Varchar and Decimal
  • Refactor TypeRegistry
  • Fix cli crash when trying to use `extract` with an invalid field
  • Add parsing support for revoke and implementation for hive connector
  • Support for dynamically detect new catalogs
  • Migrate cardinality, contains and array_position to new scalar framework
  • PoC of spilling to disk for aggregation
  • Use varchar(limit) in raptor connector
  • [WIP] Add map literal syntax
  • Add a session property to specify number of source node candidates
  • Add beginSelect notification for Connector SPI
  • Changes to the new Parquet reader
  • Overhaul Web UI
  • Support INTERSECT
  • Non equality predicates in outer join (v3)
  • Add local file connector
  • Jmx history v2
  • Add ROW constructor
  • Add source command to CLI
  • WIP Very preliminary work in progress for presto query queues
  • Add Apache Accumulo connector and documentation
  • Use precision parameter for VARCHAR columns in tpch connector
  • Fix query cancellation on shutdown
  • New output buffer implementation
  • Enable ORC stripe prunning based on the DECIMAL predicates
  • Clean up row field reference implementation
  • Add basic tuning information using properties to presto-doc.
  • Migrate from Guava's cache to Caffeine
  • Table identity
  • add more logs in spnegofilter
  • Add support for prepared statements to the cli
  • Add parametrized trim functions
  • Add bucket number as hidden column to Raptor tables
  • [WIP] Add INDETERMINATE operator to detect nulls and use it in SemiJoin
  • Support for regex functions using re2j-td in Presto
  • MySQL and PostgreSQL connector tests
  • Introduce scope to semantic analyzer and update correlation in Apply node
  • Feature explain analyze
  • Fail request for ACKed pages in SharedBuffer
  • Update to Kafka client
  • The partition/table schema compatibility check in HiveSplitManager is not working as expected for external tables.
  • Float type - first part
  • Introduce additional coercions for DECIMAL
  • Materialized query table
  • [WIP] Support row types for IN expressions
  • [WIP] Add support for GROUPING()
  • WIP Resource groups
  • Document how to enable Kerberos on the Presto CLI/coordinator.
  • Change subscript to throw on missing key
  • Small optimizations for analysis and planning
  • Add `unix_timestamp([dateString [, format]])`
  • Use varchar(limit) in raptor connector (v3)
  • Add JMX counter for cpu time in shard compaction
  • Support for parameters in prepared statements
  • Implemented setMaxRows and setLargeMaxRows in JDBC driver
  • Introduce SymbolReference expression node
  • Introduce parameter in JDBC URL for connection via SSL
  • Remove unnecessary null checks
  • Check the field count consistency for rows in VALUES clause.
  • Free DriverContext memory at failure
  • Split
  • Allow empty string as delimiter for `split` + fixed Regexp_split
  • Use primitive arrays in blocks to improve performance
  • Add support for expressions in CUBE/ROLLUP/GROUPING SETS
  • Fix nonequality join predicate pushdown
  • Add port to 'Host' column for the tasks information in the query.html
  • Fix query rejection to fail query
  • Reenable bucket writing and execution in Hive with bug fixes
  • Add type matching checker for PlanSanityChecker
  • Log shard when raptor backup times out
  • Do not store node assignments for bucketed shards
  • Float type - first part continuation
  • Allow CONCAT(varchar(x), ...) to return limited varchar type
  • Refactor docker compose
  • Decimal as default type for fixed point literals
  • Add support for varchar(x) in aggregate functions
  • Add support of varchar(x) to presto scalar functions and operators
  • DateTime types implicit conversion from varchar
  • Decimal functions
  • Refactor of parsing scalar functions annotations code
  • Add event listener plugin
  • Add support for non correlated EXISTS subquery
  • Remove unavailable partitioning preferences from GroupId
  • Document Kerberos auth should be enabled with Kerberized Hive.
  • Feature char v2
  • Feature explain analyze v2
  • Record presto version when creating a table/partition in Hive connector
  • Add FLOAT coercions
  • Capture partition count in Plan
  • Refactor shard compaction in Raptor
  • Resource groups m0s
  • Make presto-mongodb tests actually parallel
  • Add CPU limits to resource groups
  • Add support for GROUPING()
  • Merge WindowNodes with identical specifications
  • Prevent modifications to non-managed Hive tables (v2)
  • Add shard organization in Raptor
  • Add detailed error message for one of CLI problems
  • Add geo spatial functions to Presto
  • LDAP authentication support
  • Support subqueries in non INNER join
  • Add method to check if a node is the coordinator
  • Move static aggregation function instances to corresponding test and benchmark classes
  • Use redline-td and remove old-rpm profile
  • Upgrade to Sphinx 1.4
  • Add additional test for TypedSet
  • Cosine Similarity UDF function
  • Update product-tests to work with Simba's JDBC driver
  • Extract query rewrites from StatementAnalyzer
  • Add FromLiteralParamter as ScalarImplementation dependency
  • Support GROUP BY alias of the expression in the SELECT list.
  • Replace Plugin injection with setters
  • Introduce CardinalityExctractor
  • Extended support for ppc64le
  • Convert more Hive plugin to ExtendedHiveMetastore
  • JDBC Connector fix column sizes of types
  • Improve documentation for MySQL date time functions
  • Verifier shadow
  • Filter non-existing shards before organizing them
  • Add hidden $PATH column to Hive connector
  • Document GRANT and REVOKE
  • Add presto-bechto service
  • Prune Nested Fields for Parquet Columns
  • support openjdk for rpm.
  • Load schema in Tableau based on initial catalog
  • Fix SQL injection in Raptor ShardMetadataRecordCursor
  • Fix: MongoDB ObjectId handling in queries
  • Add port to 'Host' column for the tasks information in the query.html.
  • Upgrade JMH to 1.14.1
  • Fix comparison for CHAR to take into account padding
  • Use columnar processing in projection operator without filter
  • Update docs about subqueries
  • Fix malformed JSON in documentation of hive security
  • Add array_diff operator for arrays (set difference)
  • Added support for Materialized Views on postgresql
  • Make jmx.history configuration case insensitive
  • Fix deadlock in ContinuousTaskStatusFetcher#updateTaskStatus
  • Make RaptorPageSink#finish() async
  • Send serialized PrestoException as response
  • Lambda support
  • Periodically cleanup old completed raptor transactions
  • Fix shard recovery manager random interval
  • Fix duplicated TRY method generation
  • Add IGNORE NULLS clause to LAG/LEAD functions
  • [WIP] Change toShardIndexInfo to skip shards with null min/max
  • WIP Add frame info in Window functions in Explain output
  • Extend transactions to cover catalog name to connector instance
  • Add support for getting table names with prefix
  • Add beginQuery and endQuery SPI notifications
  • Handle existing privileges for grant and revoke
  • Add bucket balancer to Raptor
  • IS_DISTINCT_FROM operator.
  • Fix error message in group by query
  • [WIP] HBase connector
  • Add approximate most frequent aggregation
  • Fix view creation when table name contains quotes
  • Add new RcFile writer
  • use relative URI path from nextUri with host:port based on session server URI in client
  • Remove duplicate check in ValidateDependenciesChecker
  • Add support for reloadable FileResourceGroupConfigurationManager
  • Improve performance of default configs
  • Fix #6223 remove effectivePredicate from HiveSplit.getInfo()
  • Fix Redis distributed tests
  • Optimize scheduler
  • Add ability for event listeners to get connector-specific output metadata
  • Remove duplicate H2ResourceGroupDao and provider, fixes #6473
  • Add a variant for from_unixtime function
  • Improve TransactionMetadata thread safty
  • Hive statistics + POC usage (v3)
  • Feature prune join output
  • Enable TestHiveIntegrationSmokeTest to test ParquetPageSource
  • Projection in split source manager
  • Rename MySQL and PostgreSQL product-tests files
  • Make PredicatePushdown optimizer not create unnecessary symbols
  • Cleanup CatalogSchemaTableName
  • Increase -Xmx for Maven in .travis.yml
  • [WIP] Add function to to shorten and format numbers
  • Move page dictionary compaction logic to Page
  • Add queryIds and cpuUsage to ResourceGroupInfo
  • Support hash of array and row with nulls
  • Support scalar table functions (double unnest of ARRAY<ROW>)
  • Desugar the expressions before evaluating the constant values
  • Expose Raptor cross shard organization as a table property
  • Test dependencies commit
  • Support Row Type in New Parquet Reader
  • Add TLS support to the presto-jdbc driver
  • Implement FILTER clause for aggregations
  • Change Array element_at to return null when an index is not found
  • Include class name in exceptions for AbstractType
  • Change session identity to be non-null
  • Create failed query when session is invalid
  • Fix Mongo case-sensitive schema
  • Pass client-supplied payload field to EventListener
  • Validate that a plan has at most one OutputNode
  • Make Plan DSL independent of SymbolAllocator behavior
  • Add several improvements to the Accumulo connector
  • Prune unreferenced Apply nodes
  • Add TPC-DS queries as product tests
  • Fix warnings in Decimals and DecimalCasts
  • Change ArrayBlock to use arrays internally
  • Add support for "show grants"
  • Refactor in Annotation based functions
  • Remove DataDefinitionStatement base class from AST
  • Support Map and Array Type in New Parquet Reader
  • Presto base nosql
  • Support for coercions in scalar subquery and IN subquery
  • Add catalogs column to the nodes table
  • Add memory tracking to the new Parquet reader
  • Add access control for SHOW and listings
  • Fix kerberos product tests
  • Detect recursive view and throw SemanticException
  • Add FileHiveMetastore
  • Add split_to_map overload with keys to filter
  • Feature prune join output
  • Resolves #6550: Uncategorized error for from_unixtime
  • Allow the hadoop version and zookeeper version to be overriden
  • Add support for GROUPING()
  • Fix presto-tests intermittent OOM errors
  • Fix passing arguments to product tests runner
  • Feature refactor apply
  • [WIP] Iterative optimizer
  • Issue #6581 : Categorize IOException as HIVE_BAD_DATA
  • Detect recursive view and throw SemanticException
  • Make DiscretePredicates support lazy generation
  • Add cache size limit to Hive metastore cache
  • Fix handling of role names for grant/revoke
  • Support https communication between nodes
  • Hive statistics + POC usage (v4)
  • Remove query creation from /v1/query
  • Remove /v1/execute resource
  • Nested Column Pruning for Parquet
  • begin/cleanup query notifications
  • Add VARBINARY concatenation support
  • [WIP] Rewrite the lambda execution
  • Allow raptor shards table be scaned faster
  • Use tableScan for registered connector in TestRemoveEmptyDelete
  • Disallow invalid value for task_writer_count
  • Litany of UI fixes and improvements
  • Supply namespaced MBeanServer to plugins
  • Add missing backticks for proper formatting
  • Add object overhead to estimated memory size in various state classes
  • Dynamic filtering support for inner joins
  • Make CLI show instantaneous byte/row rates rather than average from beginning.
  • Change build dependency from CDH4 to Hadoop 1.x
  • Spill in explain analyze, query summary and web UI
  • Fix validation of floating point values in verifier
  • Improve message for invalid lambda parameter count
  • Wrap remote exception in SimpleHttpResponseHandler
  • [WIP] Migrate existing reader/function to produce new map block
  • Allow the REST /statement/ response to be of a configurable target size
  • Add stats to block size read from Orc file
  • Change heuristic-based scheduling to a 2-phase one for exchange client
  • Add missing scalar to JSON casts
  • Rule tests with Lookup
  • Improve documentation for parse_duration
  • Remove identity-based collections from analysis
  • Add preprocessor support to CLI
  • Make it possible to choose regex library on a per-query basis
  • Add lookup join operator statistics
  • Fix creating empty tables in presto-memory
  • Improve local scheduler fairness
  • Check table and view name conflict in raptor
  • Add set_ugi method to Hive Metastore
  • Add SignatureBinder test involving function<..., varchar>
  • Remove reorder_joins parameter from benchmarks
  • Make TPCH to support predicate pushdown form PART.container and type and apply Layout Constarint.predicate
  • Lazy load buffer for large ORC streams
  • Add a test that Presto works without iterative optimizer
  • Create distributed plans in BenchmarkPlanner
  • Support lambda function for regex replacement
  • Introduce OrderingScheme
  • Composable Stats Calculator (phase 1)
  • Move LDAP authentication to password authenticator plugin
  • Fix peak memory in query details UI
  • Distributed merge sort
  • Remove LocalExchangeMemoryManager#setNoBlockOnFull
  • Approximate object references for ReferenceCountMap
  • Short-circuit inner and right join when right side is empty
  • Make local exchanger responsible for blocking writes
  • PartitioningExchanger doesn't have to synchronize on accept
  • Fix for: HiveMetastore outputFormat should not be accessed from a null StorageFormat (#6972)
  • Add documentation for Hivemetastore settings
  • Updating docs for verifier
  • Suppress FieldAccessNotGuarded warnings in InternalResourceGroup
  • Remove predicate pushdown switch in TPCH connector (expose and test for #9800 bug)
  • Allow multiple LDAP user bind patterns in config
  • Fix: Query hangs when partitions are offline for retention
  • Final cleanup of orphan memory reservation
  • Update tests for simba JDBC driver
  • CBO preview mode
  • Pattern matching refactor and removal of PlanNodeMatcher
  • Refactor getQueryMaxMemory session property getter
  • Fix query peak user/total memory tracking
  • remove repeated hdfs fopen calls in ParquetPageSource
  • Add cumulative schema to JMX connector
  • Use type.appendTo() in Slice/ObjectBlockPositionState
  • Remove false limitations in cassandra docs
  • Add support for casts in InPredicate to TupleDomain
  • Track GC count and time in TaskStatus
  • Fix parquet predicate pushdown type mismatch bug
  • Add file path to RcFilePageSource exceptions
  • Disable optimize-mixed-distinct-aggregations in tests by default
  • Extract aggregation tests to a separate class
  • [Work in Progress, Test Only] Optimize min/max with Object/SliceBlockPositionState
  • Support annotation based specialized aggregates implementations
  • Represent TIMESTAMP W/TZ as ZonedDateTime in JDBC
  • User principal matching implementation into the file access control.
  • Introduce FunctionReference (an FunctionCall alternative for planner) and use it in AggregationNode
  • [Work In Progress, Test Only] Optimize Block Copy for VARCHAR and Structural Types
  • Polish: replace this lambda with a method reference.
  • Polish: use try-with-resource
  • Add support for Glue Hive metastore
  • Introduce test selection by tested feature
  • Add support for trace token in Thrift connector
  • User principal matching
  • Add authorization support for show columns
  • Parse decimal literals as DECIMAL by default
  • Add support for escape sequences in LIKE pattern of SHOW SCHEMAS and SHOW TABLES
  • Change JVM time zone in tests to better test corner cases
  • Minor improvements in SingleBlock for Array/Map/Row
  • Add docs to support Presto use system idle port
  • Add ST_IsValid and geometry_is_valid_reason functions
  • Fix current_time timezone offset
  • Remove system pool
  • Fix username extraction sample json from built-in sys access control docs
  • Add killQuery interface to QueryManager
  • Allow thrift connector to register/pass customized session property
  • Allow including a comment when adding a column to a table
  • Add option to drop ORC string stats if exceeding limit
  • Added support for skipping of stripe based on TimestampStatistics
  • Current user fn
  • couldn't create table when using viewfs
  • Add csv output option without quotes
  • Add trace token support to scheduler and exchange HTTP clients
  • Histogram remove inner classes
  • Update to Airbase 80
  • [WIP] Use smallest possible type for LongLiterals
  • Add sequence variant for DATE
  • Push SemiJoin predicate inferred from filter side to source side (v2)
  • Disable log function with config flag
  • [WIP, Prototype for Test] Refactor and Improve structural type copy for Block
  • pushdown dereference expression
  • Various ExpressionInterpreter cleanups
  • Change ROUND_N so it accepts N being INTEGER rather than BIGINT
  • Optimize min/max aggregation with BlockPositionState
  • Various cleanups for row type signature handling
  • support default timestamp format in request log
  • [WIP] Add resourceGroupId to SessionConfigurationContext
  • Run Hive S3 tests on Travis
  • [prototype, WIP] Scale to larger clusters (Part 1)
  • Remove --enable-authentication option
  • Use explain plan in QueryCompletedEvent if possible
  • Reformat code of EqualityInference class
  • Use better pattern in PushPartialAggregation rules
  • Add peak per-node system memory usage to QueryStats
  • Alternative execution strategy for multiple DISTINCT aggregates
  • Move ScalarAggregationToJoinRewriter to TransformCorrelatedScalarAggregationToJoin
  • Remove dead code in SqlQueryManager
  • Improve local aggregation parallelism
  • [WIP] Do not use count(*) when rewriting exists from apply to lateral node
  • Fix ordinal_position when adding new column
  • TIME/TIMESTAMP W/O TIME ZONE semantics fix - continuation (v3)
  • Move testSelectAllDatatypesAvro to big_query group
  • Add support for DATE predicate pushdown with Parquet via min/max and …
  • [WIP] Nan is not distinct from Nan
  • Support grouped execution of aggregation
  • Fix wrong error message for UNION type mismatch
  • Add an array_sort function that takes a lambda comparator
  • Support Nested Schema Evolution in Parquet for Presto #6675. Copy of …
  • Warnings System
  • Rewrite Driver cpu timing to estimate real cpu usage
  • Optimize array_agg with flattened group state
  • Add shard operation events in Raptor
Presto questions on Stack Overflow
  • Rest Service on top of Presto
  • How to add connectors to presto on Amazon EMR
  • Cross reference list or records against eachother in presto
  • FileAlreadyExistsException occurred when I was exporting data from Presto with Hive engine on Amazon EMR
  • Presto Interpreter in Zeppelin on EMR
  • Presto unnest json
  • Presto on EMR - setting the environment variable
  • How to optimize my presto slow query?
  • Facebook Presto unable to retrieve data from Azure Blob Storage
  • Presto - Oracle and Mongodb join query
  • Presto for cassandra
  • How many users does Presto DB support?
  • Presto / PrestoDB - Query ... No worker nodes available
  • Presto query sometimes returns 0 rows and sometimes returns some rows
  • Is there a way to use Facebook Presto 0.131 with Cassandra 3.0.0?
  • Presto and hive partition discovery
  • Does Presto have the equivalent of Hive's SET command
  • How to list all Presto workers?
  • timestamp field in presto parquet table showing bad data
  • Temporary Table SQL Presto
  • Generate interval from variable in Presto
  • Using Presto on Cloud Dataproc with Google Cloud SQL?
  • How can I run Presto on Google Cloud Dataproc?
  • Error running presto query on kinesis
  • File formats supported by Presto
  • Transforming dataset from text file format to "presto-orc" format for better prestoDB performance
  • UNION ALL / UNION on Presto
  • Does Presto support HDP2 High Availability configuration?
  • How to extract keys in a nested json array object in Presto?
  • Maven: missing artifact presto