
Index not being used in SELECT query


I have a table of about 3.25M rows with the following format in Postgres 9.4.1:



CREATE TABLE stats
(
    id serial NOT NULL,
    type character varying(255) NOT NULL,
    "references" jsonb NOT NULL,
    path jsonb,
    data jsonb,
    "createdAt" timestamp with time zone NOT NULL,
    CONSTRAINT stats_pkey PRIMARY KEY (id)
)
WITH (
    OIDS=FALSE
);


The type is a simple string no longer than 50 characters.



The references column is an object holding a list of key/value pairs: any set of simple keys, only ever one level deep, with values that are always strings. It could be



{
    "fruit": "plum",
    "car": "toyota"
}


or it could be



{
    "project": "2532"
}


The createdAt timestamp is not always generated by the database (though it is filled in by default if a value isn't supplied).



I'm currently using the table with only testing data. In this data every row has a project key as a reference, so there are 3.25M rows with a project key. There are exactly 400,000 distinct values for the project reference. There are only 5 distinct values for the type field; this would probably be no more than a few hundred in production.
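For reference, the figures above can be checked with queries along these lines (a sketch against the stats table described here; nothing beyond the question's own schema is assumed):

```sql
-- Rows that carry a project key:
SELECT count(*) FROM stats WHERE "references" ? 'project';

-- Distinct project values:
SELECT count(DISTINCT "references" ->> 'project') FROM stats;

-- Distinct type values:
SELECT count(DISTINCT type) FROM stats;
```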



So I'm trying to index the table to perform the following query quickly:



SELECT
    EXTRACT(EPOCH FROM (MAX("createdAt") - MIN("createdAt")))
FROM
    stats
WHERE
    stats."references"::jsonb ? 'project' AND
    (
        stats."type" = 'event1' OR
        (
            stats."type" = 'event2' AND
            stats."createdAt" > '2015-11-02T00:00:00+08:00' AND
            stats."createdAt" < '2015-12-03T23:59:59+08:00'
        )
    )
GROUP BY stats."references"::jsonb ->> 'project'


The query returns the time distance between two events based on two stats rows that have the same reference, in this case project. There is only ever one row for each type and selected reference value, but there may also be no rows, in which case the result returned is 0 (this is averaged out later in a different part of a larger query).



I've created an index on the createdAt, type and references columns, but the query execution plan shows a full scan instead.



The index



CREATE INDEX "stats_createdAt_references_type_idx"
ON stats
USING btree
("createdAt", "references", type COLLATE pg_catalog."default");


Execution plan:



HashAggregate  (cost=111188.31..111188.33 rows=1 width=38)
               (actual time=714.499..714.499 rows=0 loops=1)
  Group Key: ("references" ->> 'project'::text)
  ->  Seq Scan on stats  (cost=0.00..111188.30 rows=1 width=38)
                         (actual time=714.498..714.498 rows=0 loops=1)
        Filter: (
            (("references" ? 'project'::text)
             AND ((type)::text = 'event1'::text)) OR
            (((type)::text = 'event2'::text)
             AND ("createdAt" > '2015-11-02 05:00:00+13'::timestamp with time zone)
             AND ("createdAt" < '2015-12-04 04:59:59+13'::timestamp with time zone)))
        Rows Removed by Filter: 3258680
Planning time: 0.163 ms
Execution time: 714.534 ms


I'm really not that clued up on indexing and query execution plans, so if someone could point me in the right direction that'd be great.



Edit



As noted by Erwin, it appears as though even if I did have the correct indexes, a table scan would still occur, as the portion of the table returned by the query is very large. Does this mean that for this set of data this is the fastest query time I can get? I'm assuming that if I added 60M more unrelated rows without a project reference, it might use an index (if I had the correct indexes), but I don't see how adding more data could speed up the query. Maybe I'm missing something.










migrated from stackoverflow.com Jan 1 '16 at 12:59


This question came from our site for professional and enthusiast programmers.



















  • Did you try an index on (type, created_at, references) ?

    – ypercubeᵀᴹ
    Jan 1 '16 at 13:14











  • @YperSillyCubeᵀᴹ I'm just going through now and trying out some indexes so I'll try that one out next.

    – James Hay
    Jan 1 '16 at 13:19











  • No that one slowed it down to 26 seconds unfortunately :/

    – James Hay
    Jan 1 '16 at 13:32











  • Oh, I misread the WHERE condition. Are you aware that the condition is equivalent to:? WHERE (stats."references"::jsonb ? 'project' AND stats."type" = 'event1') OR (stats."type" = 'event2' AND stats."createdAt" > '2015-11-02T00:00:00+08:00' AND stats."createdAt" < '2015-12-03T23:59:59+08:00')

    – ypercubeᵀᴹ
    Jan 1 '16 at 14:06













  • Please provide your actual table definition showing exact data types and all constraints: a valid CREATE TABLE script or what you get with \d stats in psql. And always your version of Postgres. How many rows with a key references ->> 'project' and how many distinct values for references ->> 'project'?

    – Erwin Brandstetter
    Jan 1 '16 at 16:42
















postgresql database-design index postgresql-performance







edited Jan 2 '16 at 5:20







James Hay

















asked Jan 1 '16 at 8:48









James Hay



1 Answer














According to your current explanation, indexes are not going to help much (if at all) with your current query.




So there are 3.25M rows with a project key.




That's also the total number of rows, so this predicate is true for (almost) every row ... and not selective at all. But there is no useful index for the jsonb column "references". Including it in the btree index on ("createdAt", "references", type) is just pointless.



Even if you had a generally more useful GIN index on "reference" like:



CREATE INDEX stats_references_gix ON stats USING gin ("references");


... Postgres would still have no useful statistics about individual keys inside the jsonb column.
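One way to see this: the pg_stats view carries statistics for the jsonb value as a whole, not for individual keys inside it (a sketch, assuming the stats table from the question):

```sql
-- Statistics exist only for the whole jsonb value, not per key:
SELECT attname, n_distinct, null_frac
FROM   pg_stats
WHERE  tablename = 'stats'
AND    attname = 'references';
```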






There are only 5 distinct values for the type




Your query selects all of one type and an unknown fraction of another type. That's an estimated 20 - 40 % of all rows. A sequential scan is most certainly going to be the fastest plan. Indexes start to make sense for around 5 % of all rows or less.



To test, you can force the use of a possible index by setting, for debugging purposes in your session:



SET enable_seqscan = off;


Reset with:



RESET enable_seqscan;


You'll see slower queries ...





You group by project values:




GROUP BY "references"->> 'project'




And:




There are exactly 400,000 distinct values for the project reference.




That's 8 rows per project on average. Depending on value frequencies we still have to retrieve an estimated 3 - 20 % of all rows if we only picked min and max per project in a LATERAL subquery ...
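Such a min/max-per-project LATERAL query could be sketched like this (a hypothetical illustration only; the type and time-range predicates from the question would still need to be folded into the inner query):

```sql
-- Sketch: min/max "createdAt" per project via LATERAL,
-- driven by the list of distinct project values:
SELECT p.project,
       EXTRACT(EPOCH FROM (x.max_created - x.min_created)) AS seconds
FROM  (SELECT DISTINCT "references" ->> 'project' AS project FROM stats) p
CROSS  JOIN LATERAL (
   SELECT min("createdAt") AS min_created,
          max("createdAt") AS max_created
   FROM   stats
   WHERE  "references" ->> 'project' = p.project
   ) x;
```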



Try this index; it makes more sense than what you have now:



CREATE INDEX stats_special_idx ON stats (type, ("references" ->> 'project'), "createdAt")
WHERE "references" ? 'project';


Postgres might still fall back to a sequential scan ...



More might be done with a normalized schema / more selective criteria / a smarter query that only picks min and max "createdAt" ...



Query



I would write your query like this:



SELECT EXTRACT(EPOCH FROM (MAX("createdAt") - MIN("createdAt")))
FROM   stats
WHERE  "references" ? 'project'
AND   (type = 'event1' OR
       type = 'event2'
       AND "createdAt" >= '2015-11-02 00:00:00+08:00'  -- I guess you want this
       AND "createdAt" <  '2015-12-04 00:00:00+08:00'
      )
GROUP  BY "references" ->> 'project';  -- don't cast


Notes





  • Don't cast here:



    stats."references"::jsonb ? 'project'



    The column is jsonb already; you gain nothing. If the predicate were selective, index usage might be prohibited by the cast.



  • Your predicates on "createdAt" are probably incorrect at the lower and upper bounds. To include whole days, consider my suggested alternative.


  • references is a reserved word, so you always have to double-quote it. Don't use it as an identifier. The same goes for double-quoted CaMeL-case names like "createdAt". Allowed, but error-prone and a needless complication.



  • type




    type character varying(255) NOT NULL,



    The type is a simple string no longer than 50 characters.



    There are only 5 distinct values for the type field, this would probably be no more than a few hundred in production.




    None of this seems to make sense.





    • varchar(255) in itself hardly ever makes any sense. 255 characters is an arbitrary limit without significance in Postgres.

    • If it's no longer than 50 chars, then the limit of 255 makes even less sense.

    • In a properly normalized design you would have a small integer column type_id (referencing a small type table) which occupies only 4 bytes per row and makes indexes smaller and faster.



  • Ideally, you would have a project table listing all projects, and another small integer FK column project_id in stats. That would make any such query faster. And for selective criteria, much faster queries would be possible, even without the suggested normalization. Along these lines:


  • Optimize GROUP BY query to retrieve latest record per user
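A normalized layout along those lines might look like this (purely a hypothetical sketch of the suggestion above; the table and column names are made up):

```sql
-- Hypothetical normalized sketch:
CREATE TABLE event_type (
   type_id serial PRIMARY KEY,
   name    text NOT NULL UNIQUE
);

CREATE TABLE project (
   project_id serial PRIMARY KEY,
   name       text NOT NULL UNIQUE
);

CREATE TABLE stats (
   id          serial PRIMARY KEY,
   type_id     int NOT NULL REFERENCES event_type,
   project_id  int REFERENCES project,  -- NULL when a row has no project
   "createdAt" timestamptz NOT NULL DEFAULT now()
);

-- Small, fast b-tree covering the filter and grouping columns:
CREATE INDEX stats_type_project_created_idx
   ON stats (type_id, project_id, "createdAt");
```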








  • Thanks for the in-depth answer. Noted on all of the above: naming conventions, date predicates, unnecessary casting etc. I guess I've thrown myself off by using bad test data. On average the rows with project as a reference would be around 5 - 10 % of the table. I don't think the specialized index would work, as the "references" don't actually point to anything else in the database; they're used purely as a way to tie rows together for various queries, and the key (project in this case) is user defined

    – James Hay
    Jan 2 '16 at 5:10













  • Also, regarding varchar: it was automatically generated by a model definition using Sequelize. I've previously only worked with SQL Server, so that's the reason I wasn't aware of that problem. I will see if I can update the schema to use text.

    – James Hay
    Jan 2 '16 at 5:35











  • @JamesHay: There is no problem with varchar(n) per se. But unlike SQL Server, there is also no magic benefit for limiting to varchar(255). More: dba.stackexchange.com/a/89433/3684. Sequelize may be optimized for SQL Server ...

    – Erwin Brandstetter
    Jan 2 '16 at 14:42













Your Answer








StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "182"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});

function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});


}
});














draft saved

draft discarded


















StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fdba.stackexchange.com%2fquestions%2f124982%2findex-not-being-used-in-select-query%23new-answer', 'question_page');
}
);

Post as a guest















Required, but never shown

























1 Answer
1






active

oldest

votes








1 Answer
1






active

oldest

votes









active

oldest

votes






active

oldest

votes









3














According to your current explanation indexes are not going to help much (if at all) with your current query.




So there are 3.25M rows with a project key.




That's also the total number of rows, so this predicate is true for (almost) every row ... and not selective at all. But there is no useful index for the jsonb column "references". Including it in the btree index on ("createdAt", "references", type) is just pointless.



Even if you had a generally more useful GIN index on "reference" like:



CREATE INDEX stats_references_gix ON stats USING gin ("references");


... Postgres would still have no useful statistics about individual keys inside the jsonb column.






There are only 5 distinct values for the type




Your query selects all of one type and an unknown fraction of another type. That's an estimated 20 - 40 % of all rows. A sequential scan is most certainly going to be fastest plan. Indexes start to make sense for around 5 % of all rows or less.



To test, you can force a possible index by setting for debugging purposes in your session:



SET enable_seqscan = off;


Reset with:



RESET enable_seqscan;


You'll see slower queries ...





You group by project values:




GROUP BY "references"->> 'project'




And:




There are exactly 400,000 distinct values for the project reference.




That's 8 rows per project on average. Depending on value frequencies we still have to retrieve an estimated 3 - 20 % of all rows if we only picked min and max per project in a LATERAL subquery ...



Try this index; it makes more sense than what you have now:



CREATE INDEX stats_special_idx ON stats (type, ("references" ->> 'project'), "createdAt")
WHERE "references" ? 'project';


Postgres might still fall back to a sequential scan ...



More might be done with a normalized schema / more selective criteria / a smarter query that only picks min and max "createdAt" ...
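Such a min/max-only query could be sketched like this (an illustration only: the list of projects is derived on the fly here, which itself costs a scan; with a real project table this would be much cheaper, and stats_special_idx could support the inner lookups):

```sql
SELECT p.project
     , EXTRACT(EPOCH FROM (x.max_created - x.min_created)) AS seconds
FROM  (SELECT DISTINCT "references"->>'project' AS project
       FROM   stats
       WHERE  "references" ? 'project') p
CROSS  JOIN LATERAL (
   SELECT min("createdAt") AS min_created
        , max("createdAt") AS max_created
   FROM   stats
   WHERE  "references"->>'project' = p.project
   AND    type IN ('event1', 'event2')
   ) x;
```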



Query



I would write your query like this:



SELECT EXTRACT(EPOCH FROM (MAX("createdAt") - MIN("createdAt")))
FROM   stats
WHERE  "references" ? 'project'
AND   (type = 'event1'
       OR (type = 'event2'
           AND "createdAt" >= '2015-11-02 00:00:00+08:00'  -- I guess you want this
           AND "createdAt" <  '2015-12-04 00:00:00+08:00'))
GROUP  BY "references"->>'project';  -- don't cast


Notes





  • Don't cast here:



    stats."references"::jsonb ? 'project'



    The column is jsonb already, so you gain nothing. If the predicate were selective, the cast might even prevent index usage.



  • Your predicates on "createdAt" are probably incorrect at lower and upper bound. To include whole days consider my suggested alternative.


  • references is a reserved word, so you always have to double-quote it. Don't use it as an identifier. The same goes for double-quoted CamelCase names like "createdAt". They are allowed, but error-prone and a needless complication.



  • type




    type character varying(255) NOT NULL,



    The type is a simple string no longer than 50 characters.



    There are only 5 distinct values for the type field, this would probably be no more than a few hundred in production.




    None of this seems to make sense.





    • varchar(255) in itself hardly ever makes any sense. 255 characters is an arbitrary limit without significance in Postgres.

    • If it's no longer than 50 chars, then the limit of 255 makes even less sense.

    • In a properly normalized design you would have a small integer column type_id (referencing a small type table) which occupies only 4 bytes per row and makes indexes smaller and faster.



  • Ideally, you would have a project table listing all projects, and another small integer FK column project_id in stats. That would make any such query faster. And with selective criteria, much faster queries would be possible, even without the suggested normalization. Along these lines:


  • Optimize GROUP BY query to retrieve latest record per user
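The normalization suggested in the notes above could look roughly like this (table and column names are assumptions, not taken from the original schema):

```sql
-- Small lookup table for the handful of event types:
CREATE TABLE event_type (
  type_id  smallint PRIMARY KEY
, type     text NOT NULL UNIQUE
);

-- One row per project, referenced from stats:
CREATE TABLE project (
  project_id int PRIMARY KEY
, name       text NOT NULL
);

-- stats would then carry compact integer FK columns instead of
-- a varchar type and a jsonb key, keeping rows and indexes small:
--   type_id    smallint NOT NULL REFERENCES event_type
--   project_id int      REFERENCES project
```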

































  • Thanks for the in-depth answer. Noted on all of the above: naming conventions, date predicates, unnecessary casting, etc. I guess I've thrown myself off by using bad test data. On average, the rows with project as a reference would be around 5 - 10 % of the table. I don't think the specialized index would work, as the "references" don't actually point to anything else in the database; they're used purely as a way to tie rows together for various queries, and the key (project in this case) is user-defined.

    – James Hay
    Jan 2 '16 at 5:10













  • Also, regarding varchar: it was automatically generated by a model definition using Sequelize. I've previously only worked with SQL Server, so that's why I wasn't aware of that problem. I will see if I can update the schema to use text.

    – James Hay
    Jan 2 '16 at 5:35











  • @JamesHay: There is no problem with varchar(n) per se. But unlike SQL Server, there is also no magic benefit to limiting to varchar(255). More: dba.stackexchange.com/a/89433/3684. Sequelize may be optimized for SQL Server ...

    – Erwin Brandstetter
    Jan 2 '16 at 14:42


















answered Jan 2 '16 at 4:20 by Erwin Brandstetter; edited by Pang












