Finding optimal indices, different query competing indices
My schema:
CREATE TABLE "logs" (
    "id" serial NOT NULL DEFAULT nextval('logs_id_seq'::regclass),
    "request" varchar(1024),
    "token" varchar(512) DEFAULT NULL,
    "was_batch_request" bool NOT NULL DEFAULT false,
    "created" timestamp,
    PRIMARY KEY ("id")
);
CREATE UNIQUE INDEX "idx_logs_token_created" ON "logs" ("token", "created");
CREATE UNIQUE INDEX "idx_logs_created" ON "logs" ("created");
I have around 5,000,000 entries with varying token values, and there are two queries I regularly run against this table. A token usually has 200 to 500 characters, and there are about 600 distinct tokens in the table.
Query 1:
SELECT
TO_CHAR("logs".created, 'YYYYMMDDHH24') AS date
, COUNT(*) AS num
FROM "public"."logs" AS "logs"
WHERE "logs"."created" BETWEEN NOW() - INTERVAL '24 hour' AND NOW()
GROUP BY TO_CHAR("logs"."created", 'YYYYMMDDHH24')
ORDER BY "date" DESC
LIMIT 24
For this query I have an index on created.
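As an aside, grouping on TO_CHAR(...) turns the grouping key into text. A sketch of an equivalent formulation that keeps the key a timestamp (assuming hourly buckets are all that is needed; whether the planner exploits the created index ordering depends on version and statistics):
-- Sketch: same 24 hourly buckets, but grouped on a timestamp key.
SELECT DATE_TRUNC('hour', created) AS hour
     , COUNT(*) AS num
FROM logs
WHERE created BETWEEN NOW() - INTERVAL '24 hour' AND NOW()
GROUP BY 1
ORDER BY 1 DESC
LIMIT 24;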
Query 2:
SELECT count(*)
FROM logs
WHERE token = 'token_600_chars_long'
  AND created >= NOW() - INTERVAL '1 day';
For this query I created an index on (token, created).
Some observations:
- Query 1 takes around 2s
- Query 2 takes around 1s
The issue with Query 1 is unclear. It uses the created index; without that index, it takes as long as 5 s.
The issue I see with Query 2: it does not use the (token, created) index but the created index.
However, Query 2 does use the (token, created) index after either of the following changes:
- I drop the created index (but this slows Query 1 down to 5 s).
- I change the interval to e.g. 4 days; the query then uses the (token, created) index and finishes in 100-200 ms.
Explain for Query 1:
QUERY PLAN
Limit (cost=75793.01..75793.07 rows=24 width=8) (actual time=2077.100..2077.103 rows=23 loops=1)
-> Sort (cost=75793.01..75814.66 rows=8662 width=8) (actual time=2077.098..2077.099 rows=23 loops=1)
Sort Key: (to_char(created, 'YYYYMMDDHH24'::text))
Sort Method: quicksort Memory: 26kB
-> HashAggregate (cost=75442.85..75551.12 rows=8662 width=8) (actual time=2076.830..2076.979 rows=23 loops=1)
Group Key: to_char(created, 'YYYYMMDDHH24'::text)
-> Index Only Scan using idx_logs_created on logs "logs" (cost=0.44..72326.16 rows=623337 width=8) (actual time=0.244..1804.994 rows=876114 loops=1)
Index Cond: ((created >= (now() - '24:00:00'::interval)) AND (created <= now()))
Heap Fetches: 876143
Planning time: 0.968 ms
Execution time: 2077.309 ms
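Note the Heap Fetches: 876143 line: the index-only scan still visits the heap for almost every row, which usually means the visibility map is stale. A hedged first step, assuming autovacuum has not kept up with this table:
-- Refresh the visibility map (and statistics) so the index-only scan
-- can actually skip heap visits. VACUUM cannot run inside a transaction block.
VACUUM (ANALYZE) logs;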
Explain for Query 2:
QUERY PLAN
Aggregate (cost=70856.83..70856.84 rows=1 width=0) (actual time=640.327..640.327 rows=1 loops=1)
Output: count(*)
-> Index Scan using idx_logs_created on public.logs (cost=0.44..70795.30 rows=24610 width=0) (actual time=0.096..637.190 rows=30863 loops=1)
Output: id, request, token, was_batch_request, created
Index Cond: (logs.created >= (now() - '1 day'::interval))
Filter: ((logs.token)::text = 'DazToken'::text)
Rows Removed by Filter: 843711
Planning time: 0.381 ms
Execution time: 640.417 ms
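Here the planner expects 24,610 rows, yet the token filter discards 843,711 of the rows fetched via created, so the column statistics on token may be too coarse. One knob worth trying (a sketch, not a guaranteed fix):
-- Collect a larger sample for the token column, then refresh statistics.
ALTER TABLE logs ALTER COLUMN token SET STATISTICS 1000;
ANALYZE logs;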
In MySQL I could force certain queries to use specific indexes; to my knowledge, PostgreSQL does not provide such a thing.
Is it possible to speed up the queries with the current schema?
postgresql index
asked Jul 16 '15 at 5:54 by mark (edited Jul 16 '15 at 20:53)
I know it is no answer, but how do you squeeze 600 or 700 characters into a varchar(512)? – dezso, Jul 16 '15 at 13:46
Interesting... I guess I mixed up numbers somewhere. Rest assured they fill the token value; I fixed it, thanks! – mark, Jul 16 '15 at 20:53
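Related to the token length: with 200-500+ characters per value, every key in idx_logs_token_created is very wide. A hypothetical alternative (names illustrative, not from the thread) is an expression index on a fixed-width hash, with Query 2 rewritten to filter on the identical expression so the index can be used:
-- Hypothetical: index a 32-character hash instead of the full token.
CREATE INDEX idx_logs_md5_token_created ON logs (md5(token), created);

-- Query 2 must then filter on the same expression:
SELECT count(*)
FROM logs
WHERE md5(token) = md5('token_600_chars_long')
  AND created >= NOW() - INTERVAL '1 day';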
1 Answer
The query planner seems to think that created is more selective than token. Are your table statistics up to date? Check the last_analyze column in:
SELECT schemaname, relname, last_analyze
FROM pg_stat_all_tables
WHERE relname = 'logs';
To really force the issue, you can use a temporary table:
begin;
create temporary table tmp_logs (created timestamp);
-- the original was missing INTO, which PostgreSQL requires
insert into tmp_logs select created from logs where token = 'token_600_chars_long';
select count(*) from tmp_logs where created >= NOW() - INTERVAL '1 day';
commit;
The first query only uses a condition on token, so the optimizer can only use an index that starts with token.
answered Jul 16 '15 at 11:58 – Andomar
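As a hedged addendum: on the PostgreSQL versions current at the time (before version 12), a CTE was an optimization fence, so the same effect can be sketched without an explicit temporary table (assuming that materialization behavior):
-- The CTE is planned in isolation, so only the token condition is
-- visible to it and the (token, created) index can be chosen.
with by_token as (
    select created
    from logs
    where token = 'token_600_chars_long'
)
select count(*)
from by_token
where created >= NOW() - INTERVAL '1 day';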
This query is running every few minutes (yes, even when the interval says 1 day), so it doesn't seem feasible to me to create a temporary table for this, which would not be so temporary then. It seems the table was not analyzed; I did ANALYZE it, but the query's execution and EXPLAIN output didn't change. – mark, Jul 17 '15 at 9:51
@mark: A temporary table lives just as long as the transaction. Postgres creates temporary tables under the hood to store the results of joins, hashes, subqueries and so on. We have queries with temporary tables that run 50 times a second. – Andomar, Jul 17 '15 at 11:14
I tried it but didn't see any improvements going below the initial time. – mark, Jul 17 '15 at 20:12