


How Can I Calculate Working Hours Between Dates in PostgreSQL, Considering Weekends and Specific Working Hours?
Jan 03, 2025, 10:35 AM

Calculating Working Hours Between Dates in PostgreSQL
Introduction
Determining the number of working hours between two timestamps is a common requirement in areas such as payroll and scheduling. In PostgreSQL, the calculation has to account for both the day of the week and the time of day. This article outlines several solutions based on the following criteria:
- Weekends (Saturdays and Sundays) are excluded from working hours.
- Working hours are defined as Monday through Friday, 8 am to 3 pm.
- Fractional hours are to be included in the calculation.
Solution
Method 1: Rounded Results for Just Two Timestamps
This approach operates on units of 1 hour, ignoring fractional hours. It is a simple but less precise method.
Query:
SELECT count(*) AS work_hours
FROM   generate_series(timestamp '2013-06-24 13:30'
                     , timestamp '2013-06-24 15:29' - interval '1h'
                     , interval '1h') h
WHERE  EXTRACT(ISODOW FROM h) < 6
AND    h::time >= '08:00'
AND    h::time <= '14:00';
Example Input:
2013-06-24 13:30, 2013-06-24 15:29
Output:
1 (the interval covers 1.5 working hours, 13:30 to 15:00, which the hour-based count rounds down to 1)
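If the same calculation is needed in several places, the query can be wrapped in a SQL function. The sketch below is only illustrative: the function name work_hours_rounded and its signature are not part of the original solution, and named parameter references assume PostgreSQL 9.2 or later. The Mon-Fri, 8 am to 3 pm window is hard-coded:
-- Illustrative helper (hypothetical name and signature) wrapping Method 1.
CREATE OR REPLACE FUNCTION work_hours_rounded(_start timestamp, _end timestamp)
  RETURNS bigint
  LANGUAGE sql STABLE AS
$func$
SELECT count(*)
FROM   generate_series(_start, _end - interval '1h', interval '1h') h
WHERE  EXTRACT(ISODOW FROM h) < 6      -- Monday (1) through Friday (5)
AND    h::time >= '08:00'              -- first counted hour starts at 08:00
AND    h::time <= '14:00';             -- last counted hour starts at 14:00 and ends at 15:00
$func$;

-- Usage, reproducing the example above (returns 1):
SELECT work_hours_rounded('2013-06-24 13:30', '2013-06-24 15:29');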
Method 2: Rounded Results for a Table of Timestamps
This approach extends the previous method to handle a table of timestamp pairs.
Query:
SELECT t_id, count(*) AS work_hours
FROM  (
   SELECT t_id, generate_series(t_start, t_end - interval '1h', interval '1h') AS h
   FROM   t
   ) sub
WHERE  EXTRACT(ISODOW FROM h) < 6
AND    h::time >= '08:00'
AND    h::time <= '14:00'
GROUP  BY 1
ORDER  BY 1;
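The query assumes a table t with columns t_id, t_start, and t_end; its definition is not shown in the original answer. A minimal setup sketch (the temp-table definition is an assumption for illustration), populated with the example rows listed under Method 3 below:
-- Assumed shape of table t; rows taken from the example input in Method 3.
CREATE TEMP TABLE t (
  t_id    int PRIMARY KEY
, t_start timestamp
, t_end   timestamp
);

INSERT INTO t (t_id, t_start, t_end) VALUES
  (1, '2009-12-03 14:00', '2009-12-04 09:00')
, (2, '2009-12-03 15:00', '2009-12-07 08:00')
, (3, '2013-06-24 07:00', '2013-06-24 12:00')
, (4, '2013-06-24 12:00', '2013-06-24 23:00')
, (5, '2013-06-23 13:00', '2013-06-25 11:00')
, (6, '2013-06-23 14:01', '2013-06-24 08:59');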
Method 3: More Precise Calculation
For a finer-grained result, the same technique can be applied with smaller time steps; here, 5-minute slices are counted instead of whole hours.
Query:
SELECT t_id, count(*) * interval '5 min' AS work_interval
FROM  (
   SELECT t_id, generate_series(t_start, t_end - interval '5 min', interval '5 min') AS h
   FROM   t
   ) sub
WHERE  EXTRACT(ISODOW FROM h) < 6
AND    h::time >= '08:00'
AND    h::time <= '14:55'
GROUP  BY 1
ORDER  BY 1;
Example Input:
| t_id | t_start             | t_end               |
|------|---------------------|---------------------|
| 1    | 2009-12-03 14:00:00 | 2009-12-04 09:00:00 |
| 2    | 2009-12-03 15:00:00 | 2009-12-07 08:00:00 |
| 3    | 2013-06-24 07:00:00 | 2013-06-24 12:00:00 |
| 4    | 2013-06-24 12:00:00 | 2013-06-24 23:00:00 |
| 5    | 2013-06-23 13:00:00 | 2013-06-25 11:00:00 |
| 6    | 2013-06-23 14:01:00 | 2013-06-24 08:59:00 |
Output:
| t_id | work_interval |
|------|---------------|
| 1    | 02:00:00      |
| 2    | 07:00:00      |
| 3    | 04:00:00      |
| 4    | 03:00:00      |
| 5    | 10:00:00      |
| 6    | 00:55:00      |
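The step size is a trade-off: smaller slices are more precise but generate more rows per interval. A sketch of the same query at 1-minute resolution, assuming timestamps at whole-minute precision:
SELECT t_id, count(*) * interval '1 min' AS work_interval
FROM  (
   SELECT t_id, generate_series(t_start, t_end - interval '1 min', interval '1 min') AS h
   FROM   t
   ) sub
WHERE  EXTRACT(ISODOW FROM h) < 6
AND    h::time >= '08:00'
AND    h::time <= '14:59'   -- minutes starting up to 14:59 still fall before 15:00
GROUP  BY 1
ORDER  BY 1;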
Method 4: Exact Results
This approach returns exact results, down to microsecond precision. It is more involved than the previous methods, but it only has to step through whole hours rather than small time slices, handling the fractional hours at each end separately.
Query:
WITH var AS (SELECT '08:00'::time AS v_start, '15:00'::time AS v_end)
SELECT t_id
     , COALESCE(h.h, '0')           -- add / subtract fractions
       - CASE WHEN EXTRACT(ISODOW FROM t_start) < 6
               AND t_start::time > v_start
               AND t_start::time < v_end
              THEN t_start - date_trunc('hour', t_start)
              ELSE '0'::interval END
       + CASE WHEN EXTRACT(ISODOW FROM t_end) < 6
               AND t_end::time > v_start
               AND t_end::time < v_end
              THEN t_end - date_trunc('hour', t_end)
              ELSE '0'::interval END AS work_interval
FROM   t
CROSS  JOIN var
LEFT   JOIN (  -- count full hours, similar to above solutions
   SELECT t_id, count(*)::int * interval '1h' AS h
   FROM  (
      SELECT t_id, v_start, v_end
           , generate_series(date_trunc('hour', t_start)
                           , date_trunc('hour', t_end) - interval '1h'
                           , interval '1h') AS h
      FROM   t, var
      ) sub
   WHERE  EXTRACT(ISODOW FROM h) < 6
   AND    h::time >= v_start
   AND    h::time <= v_end - interval '1h'
   GROUP  BY 1
   ) h USING (t_id)
ORDER  BY 1;
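The LEFT JOIN subquery counts full hours anchored at hour boundaries, and the two CASE expressions then subtract the unworked fraction before t_start and add the worked fraction between the last whole hour and t_end. A minimal illustration of that fractional building block, using the timestamp from Method 1:
SELECT timestamp '2013-06-24 13:30'
       - date_trunc('hour', timestamp '2013-06-24 13:30') AS start_fraction;
-- start_fraction = 00:30:00: the half hour from 13:00 to 13:30 was not worked,
-- so it is subtracted from the full-hour count, which begins at 13:00.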
Together, these methods range from quick, rounded approximations to exact working-hour calculations in PostgreSQL.