Performance Optimization:
- Teradata decides by itself whether or not to use an index – if you are not careful, you spend time in table updates maintaining an index that is never used. (You cannot give the query optimizer hints to use a particular index, though collecting statistics may affect the optimizer's strategy.)
- In the MP-RAS environment, look at the script "/etc/gsc/bin/perflook.sh". This will provide a system-wide snapshot in a series of files. The GSC uses this data for incident analysis.
- When using an index, one must make sure that the index condition is met in subqueries (those using IN, nested queries, or derived tables)
- An indication of proper index use is the EXPLAIN entry "a ROW HASH MATCH SCAN across ALL-AMPS" (see the sketch after this list)
- If the index is not used, the plan falls back to a full-table scan, whose run time grows as the history table grows
- Maintaining index information is a time- and space-consuming affair. Sometimes Teradata does much better when you "manually" imitate the index by building it from scratch.
- Maintaining a join index might help, but you cannot MultiLoad into a table that is part of a join index – loading with TPump or plain SQL is OK but does not perform as well. Dropping and re-creating a join index on a big table takes time and space.
- Be suspicious when your Teradata EXPLAIN produces 25 or more steps for your query (even without updating the results) and the actual query is a join of six or more tables
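To see which plan you are getting, prefix the query with EXPLAIN and look for the phrases above. A minimal sketch, with an illustrative history table:

EXPLAIN
SELECT customer_id, COUNT(*)
FROM purchase_history
GROUP BY customer_id;

-- In the output, "a ROW HASH MATCH SCAN" indicates the index/row ordering
-- is being exploited; "an all-rows scan" means a full-table scan.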
Case example:
We had already given up maintaining the secondary indexes, because we had not had much use for them.
After some trial and error we ended up with a strategy where the actual "purchase frequency analysis" is never run directly against the history table.
Instead:
- There is a "one-shot" run to build the initial "customer's previous purchase" table from the "purchase history" – it takes time, but that time is saved later
- The purchase frequency is calculated by joining the "latest purchase" with the "customer's previous purchase".
- When the "latest purchase" rows are inserted into the "purchase history", the "customer's previous purchase" table is dropped and recreated by merging the "customer's previous purchase" with the "latest purchase" (sketched below)
- Following these steps, performance is still not fast (about 25 minutes on our two-node system for a batch of almost 1,000,000 latest receipts), but it is tolerable now
- (We also tested putting both the previous and latest purchases in the same table, but because its size was on average much bigger than the pure "latest purchase" table, the self-join was slower in that case)
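A minimal sketch of the drop-and-recreate merge step, assuming hypothetical tables prev_purchase and latest_purchase keyed by customer_id; the real tables and columns differ:

-- Build the merged "customer's previous purchase" table in one pass.
CREATE TABLE prev_purchase_new AS
( SELECT customer_id, MAX(purchase_date) AS purchase_date
  FROM ( SELECT customer_id, purchase_date FROM prev_purchase
         UNION ALL
         SELECT customer_id, purchase_date FROM latest_purchase ) AS t
  GROUP BY customer_id
) WITH DATA PRIMARY INDEX (customer_id);

DROP TABLE prev_purchase;
RENAME TABLE prev_purchase_new TO prev_purchase;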
*********
MANAGING CONCURRENT WORKLOADS
Integrated e-commerce efforts present many warehouse challenges. Here's how Teradata can help.
The word e-commerce means many things to many people. Although for some it connotes only the Web, the real value of e-commerce can only be realized when all channels of a business are integrated and have full access to all customer information and transactions. In fact, to me, e-commerce means using the rich technology available today to bring added value to the customer and additional value to the business through all customer interaction channels.
Under this definition of e-commerce, an active warehouse is at the epicenter, providing the storage and access for decision making in the e-commerce world. As more and more companies adopt active warehousing for this purpose, data warehouse workloads are expanding and changing.
If your warehouse relies on a Teradata DBMS, you'll find that handling the challenge of high-volume, widely varying, disparate service-level workloads is one of its core competencies. One of the biggest concerns I hear from customers is how to deal with the quickly rising number of concurrent queries and concurrent users that can result from active warehousing and e-commerce initiatives. Expected service levels vary widely among different groups of users, as do query types. And, of course, the entire workload must scale upward linearly as the demand increases, ideally with a minimum of effort required from users and systems staff. Here's a look at some of the most frequent questions I receive on the subject of mixed workloads and concurrency requirements.
How do I balance the work coming in across all nodes of my Teradata configuration?
You don't. Teradata automatically balances sessions across all nodes to evenly distribute work across the entire parallel configuration. Users connect to the system as a whole rather than a specific node, and the system uses a balancing algorithm to assign their sessions to a node. Balancing requires no effort from users or system administrators.
Does Teradata balance the work that queries generate?
The even distribution of data is the key to parallelism and scalability in Teradata. Each query request is sent to all units of parallelism, each of which has an even portion of the data to process, resulting in even work distribution across the entire system.
For short queries and update flow typical of Web interactions, the optimizer recognizes that only a single unit of parallelism is needed. A query coordinator routes the work to the unit of parallelism needed to process the request. The hashing algorithm does not cluster related data, but spreads it out across the entire system. For example, this month's data and even today's data is evenly distributed across all units of parallelism, which means the work to update or look at that data is evenly distributed.
Will many concurrent requests cause bottlenecks in query coordination?
Query coordination is carried out by a fully parallel parsing engine (PE) component. Usually, one or more PEs are present on each node. Each PE handles the requests for a set of sessions, and sessions are spread evenly across all configured PEs. Each PE is multithreaded, so it can handle many requests concurrently. And each PE is independent of the others with no required cross-coordination. The number of users logged on and requests in flight are limited only by the number of PEs in the configuration.
How do you avoid bottlenecks when the query coordinator must retrieve information from the data dictionary?
In Teradata, the DBMS itself manages the data dictionary. Each dictionary table is simply a relational table, parallelized across all nodes. The same query engine that manages user workloads also manages the dictionary access, using all nodes for processing dictionary information to spread the load and avoid bottlenecks. The PE even caches recently used dictionary information in memory. Because each PE has its own cache, there is no coordination overhead. The cache for each PE learns the dictionary information most likely to be needed by the sessions assigned to it.
With a large volume of work, how can all requests execute at once?
As in any computer system, the total number of items that can execute at the same time is always limited to the number of CPUs available. Teradata uses the scheduling services Unix and NT provide to handle all the threads of execution running concurrently. Some requests might also exist on other queues inside the system, waiting for I/O from the disk or a message from the BYNET, for example. Each work item runs in a thread; each thread gets a turn at the CPU until it needs to wait for some external event or until it completes the current work. Teradata configures several units of parallelism in each SMP node. Each unit of parallelism contains many threads of execution that aren't restricted to a particular CPU; therefore, every thread gets to compete equally for the CPUs in the SMP node.
There is a limit, of course, to the number of pieces of work that can actually have a thread allocated in a unit of parallelism. Once that limit is reached, Teradata queues work for the threads. Each thread is context free, which means that it is not assigned to any session, transaction, or request. Therefore, each thread is free to work on whatever is next on the queue. The unit of work on the queue is a processing step for a request. Combining the queuing of steps with context-free threads allows Teradata to share the processing service equally across all the concurrent requests in the system. From the users' point of view, all the requests in the system are running, receiving service, and sharing system resources.
How does Teradata avoid resource contention and the resulting performance and management problems?
Teradata algorithms are very resource efficient. Other DBMSs optimize for single-query performance by giving all resources to the single query. But Teradata optimizes for throughput of many concurrent queries by allocating resources sparingly and using them efficiently. This kind of optimization helps avoid wide performance variations that can occur depending on the number of concurrent queries.
When faced with a workload that requires more system resources than are available, Teradata tunes itself to that workload. Thrashing, a common performance failure mode in computer systems, occurs when the system has fewer resources than the current workload requires and begins using more processing time to manage resources than to do the work. With most databases, a DBA would tune the system to avoid thrashing. However, Teradata adjusts automatically to workload changes by adjusting the amount of running work and internally pushing back incoming work. Each unit of parallelism manages this flow control mechanism independently.
If all concurrent work shares resources evenly, how are different service levels provided to different users?
The Priority Scheduler Facility (PSF) in Teradata manages service levels among different parts of the workload. PSF allows granular control of system resources. The system administrator can define up to five resource partitions; each partition contains four available priorities. Together, they provide 20 allocation groups (AGs) to which portions of the workload are assigned by an attribute of the logon ID for the user or application. The administrator assigns each AG a portion of the total system resources and a scheduling policy.
For example, the administrator can assign short queries from the Web site a guaranteed 20 percent of system resources and a high priority. In contrast, the administrator might assign medium priority and 10 percent of system resources to more complex queries with lower response-time requirements. Similarly, the administrator might assign data mining queries a low priority and five percent of the total resources, effectively running them in the background. You can define policies so that the resources adjust to the work in the system. For example, you could allow data mining queries to take up all the resources in the system if nothing else is running.
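In practice the assignment rides on the logon: the performance group embedded in the user's account string selects the allocation group. A minimal sketch, assuming hypothetical users and an administrator-defined convention where $H, $M, and $L map to high-, medium-, and low-priority allocation groups:

MODIFY USER web_app AS ACCOUNT = ('$H$WEB');    -- short Web queries, high priority
MODIFY USER analyst AS ACCOUNT = ('$M$ADHOC');  -- complex ad hoc queries, medium
MODIFY USER miner   AS ACCOUNT = ('$L$MINE');   -- data mining, background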
Unlike other scheduling utilities, PSF is fully integrated into the DBMS, not managed at the task or thread level, which makes it easier to use for parallel database workloads. Because PSF is an attribute of the session, it follows the work wherever it goes in the system. Whether that piece of work is executed by a single thread in a single unit of parallelism or in 2,000 threads in 500 units of parallelism, PSF manages it without system administrator involvement.
CPU scheduling is a primary component of PSF, using all the normal techniques (such as quantum size, CPU queues by priority, and so on). However, PSF is endemic throughout the Teradata DBMS. There are many queues inside a DBMS handling a large-volume mixed workload, and all of them are prioritized based on the priority of the work. Thus, a high-priority query entered after several lower-priority requests that are awaiting their turn will go to the head of the queue and execute first. I/O is also managed by priority: data warehouse workloads are heavy I/O users, so a large query performing a lot of I/O could hold up a short, high-priority request. PSF puts the high-priority request's I/Os at the head of the queue, helping to deliver response-time goals.
Data warehouse databases often set the system environment to allow for fast scans. Does Teradata performance suffer when the short work is mixed in?
Because Teradata was designed to handle a high volume of concurrent queries, it doesn't count on sequential scans to produce high performance for queries. Although other DBMS products see a large fall in request performance when they go from a single large query to multiple queries or when a mixed workload is applied, Teradata sees no such performance change. Teradata never plans on sequential access in the first place. In fact, Teradata doesn't even store the data for sequential accesses. Therefore, random accesses from many concurrent requests are just business as usual.
Sync scan algorithms provide additional optimization. When multiple concurrent requests are scanning or joining the same table, their I/O is piggybacked so that only a single I/O is performed to the disk. Multiple concurrent queries can run without increasing the physical I/O load, leaving the I/O bandwidth available for other parts of the workload.
What if work demand exceeds Teradata's capabilities?
There are limits to how much work the engine can handle. A successful data warehouse will almost certainly create a demand for service that is greater than the total processing power available on the system. Teradata always puts into execution any work presented to the DBMS.
If the total demand is greater than the total resources, then controls must be in place before the work enters the DBMS. When your warehouse reaches this stage, you can use Database Query Manager (DBQM) to manage the flow of user requests into the warehouse. DBQM, inserted between the users' ODBC applications and the DBMS, evaluates each request and then applies a set of rules created by the system administrator. If the request violates any of the rules, DBQM notifies the user that the request is denied or deferred to a later time for execution.
Rules can include, for example, system use levels, query cost parameters, time of day, objects accessed, and authorized users.
How do administrators and DBAs stay on top of complex mixed workloads?
The Teradata Manager utility provides a single operational system view for administrators and DBAs. The tool provides real-time performance, logged past performance, users and queries currently executing, management of the schema, and more.
STAYING ACTIVE
- Number of concurrent MLOAD jobs
- Number of PEs in the system
SQL Tuning:
- Reduce the Workload
- Balance the Workload
- Parallelize the Workload
- Upgrade
1. Introduction
Performance of a Teradata SQL-based application depends on several factors, including database design, network latency, query optimization, and hardware specifications. Poorly tuned queries often cause performance problems. The objective of tuning a system is either to reduce the response time for end users or to reduce the resources used to process the same work. Both objectives can be achieved in several ways:
- If a commonly executed query needs to access a small percentage of the data in a table, it can be executed more efficiently by using an index. Creating and using such an index reduces the amount of resources used.
- If a user is looking at the first twenty rows of the 10,000 rows returned in a specific sort order, and if the query (and sort order) can be satisfied by an index, then the user does not need to access and sort all 10,000 rows to see the first 20 (see the sketch below).
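A minimal sketch of the second case, with illustrative names; whether the optimizer actually uses the index depends on its selectivity and on collected statistics:

CREATE INDEX (last_name) ON employees;  -- secondary index on the sort column

SELECT TOP 20 employee_id, last_name, salary
FROM employees
ORDER BY last_name;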
Balance the Workload
Parallelize the Workload
Upgrade
- Recollect statistics where possible.
- Save the EXPLAINs for the base queries.
- Run the test bed again after the upgrade and get the new EXPLAINs.
- Compare the two EXPLAINs.
- Check for faster or slower response on any of the test queries.
- If slower, compare the before and after EXPLAIN plans to find what changed (one way to capture them is sketched below).
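One way to capture the plans is with BTEQ, exporting the EXPLAIN text of each base query to a file before and after the upgrade (file and query are illustrative):

.EXPORT REPORT FILE = explain_before.txt
EXPLAIN
SELECT customer_id, SUM(order_total)
FROM orders
GROUP BY customer_id;   -- repeat for each base query
.EXPORT RESET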
2. Improve SQL statement tuning
2.1 Reviewing the Execution Plan
When reviewing the execution plan, verify that:
- The plan is such that the driving table has the best filter.
- The join order in each step returns the fewest rows to the next step (that is, the join order should reflect, where possible, going to the best not-yet-used filters).
- There are no unintentional Cartesian products (even with small tables).
- Each table is being accessed efficiently.
2.2 Restructuring SQL statement
2.2.1 AND or ‘=’ clause
2.2.2 IN or BETWEEN clauses
A query such as:

SELECT *
FROM customer
WHERE customer_number IN (1000, 1001, 1002, 1003, 1004);

can often be rewritten more cheaply with a range condition:

SELECT *
FROM customer
WHERE customer_number BETWEEN 1000 AND 1004;
2.2.3 LIKE clause
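The usual guideline: a leading wildcard forces every value to be examined, while a bounded prefix can use an index on the column if a suitable one exists (illustrative table):

SELECT * FROM customer WHERE customer_name LIKE '%son';   -- leading wildcard: full scan
SELECT * FROM customer WHERE customer_name LIKE 'Smith%'; -- prefix: an index can help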
2.2.4 IN and EXISTS clauses
Below are two examples that demonstrate the benefits of IN and EXISTS. Both examples use the same employees and orders schema.
Example 1:
This example demonstrates how rewriting a query to use IN can improve performance. This query identifies all employees who have placed orders on behalf of customer 144.
Query using EXISTS:

SELECT e.employee_id, e.first_name, e.last_name, e.salary
FROM employees e
WHERE EXISTS (SELECT 1 FROM orders o
              WHERE e.employee_id = o.sales_rep_id
                AND o.customer_id = 144);

Query using IN:

SELECT e.employee_id, e.first_name, e.last_name, e.salary
FROM employees e
WHERE e.employee_id IN (SELECT o.sales_rep_id FROM orders o
                        WHERE o.customer_id = 144);

Explanation: In the query using EXISTS, an unnecessary extra step is performed. The subquery already joins employees and orders on employee_id = sales_rep_id and filters on customer_id = 144, which yields the result set needed; yet the parent query must still probe that subquery, comparing employee_id once more for each candidate row. In the query using IN, the subquery simply returns the sales_rep_id values for the specified customer, and the join is performed once by the parent query. Thus the query using IN does much less work than the query using EXISTS.
Example 2:
This example shows how, conversely, EXISTS can win when the parent query carries selective conditions of its own (here, department_id = 80 and job_id = 'SA_REP'), since the correlated subquery is then probed only for the few rows that survive those filters.

Query using IN:

SELECT e.employee_id, e.first_name, e.last_name, e.department_id, e.salary
FROM employees e
WHERE e.department_id = 80
  AND e.job_id = 'SA_REP'
  AND e.employee_id IN (SELECT o.sales_rep_id FROM orders o);

Query using EXISTS:

SELECT e.employee_id, e.first_name, e.last_name, e.salary
FROM employees e
WHERE e.department_id = 80
  AND e.job_id = 'SA_REP'
  AND EXISTS (SELECT 1 FROM orders o
              WHERE e.employee_id = o.sales_rep_id);
2.2.5 DISTINCT clause
2.2.6 UNION or UNION ALL clauses
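UNION must sort the combined result to remove duplicates; when the branches cannot overlap, UNION ALL gives the same answer without that sort. A sketch with an illustrative orders table:

-- The two branches are mutually exclusive, so duplicates are impossible
-- and UNION ALL avoids the duplicate-elimination sort UNION would perform.
SELECT order_id, order_total FROM orders WHERE order_total < 1000
UNION ALL
SELECT order_id, order_total FROM orders WHERE order_total >= 1000;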
2.2.7 CASE statement
Three separate scans of the same table:

SELECT COUNT(*) FROM employees WHERE salary < 2000;
SELECT COUNT(*) FROM employees WHERE salary BETWEEN 2000 AND 4000;
SELECT COUNT(*) FROM employees WHERE salary > 4000;

can be combined into a single scan using CASE expressions:

SELECT COUNT(CASE WHEN salary < 2000 THEN 1 ELSE NULL END) AS count1,
       COUNT(CASE WHEN salary BETWEEN 2000 AND 4000 THEN 1 ELSE NULL END) AS count2,
       COUNT(CASE WHEN salary > 4000 THEN 1 ELSE NULL END) AS count3
FROM employees;
2.2.8 SELECT DISTINCT clause
Remember that AND has precedence over OR; use parentheses to make the intended logic explicit.
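For example, the two WHERE clauses below differ only in parentheses and return different rows:

-- Parsed as: job_id = 'SA_REP' OR (job_id = 'SA_MAN' AND salary > 4000)
SELECT * FROM employees
WHERE job_id = 'SA_REP' OR job_id = 'SA_MAN' AND salary > 4000;

-- With parentheses, the salary condition applies to both job codes.
SELECT * FROM employees
WHERE (job_id = 'SA_REP' OR job_id = 'SA_MAN') AND salary > 4000;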
2.3 Eliminate the use of Temporary Tables
2.4 Avoid Mixed Type Expressions
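For example (illustrative column), comparing a character column with a numeric literal forces an implicit conversion that can prevent index use:

-- account_number is CHAR(10); the numeric literal forces a conversion. Avoid:
SELECT * FROM accounts WHERE account_number = 12345;

-- Comparing like with like lets an index on account_number be considered:
SELECT * FROM accounts WHERE account_number = '12345';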
3. Indexes
3.1 Secondary Indexes
- SIs require additional storage to hold their subtables. In the case of a Fallback table, the SI subtables are Fallback also. Twice the additional storage space is required.
- SIs require additional I/O to maintain these subtables.
The optimizer may decline to use a NUSI:
- When the NUSI is not selective enough.
- When no COLLECTed STATISTICS are available (collecting them is sketched below).
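Collecting statistics on the indexed columns gives the optimizer the demographics it needs to judge NUSI selectivity (names illustrative):

COLLECT STATISTICS ON employees COLUMN (last_name);
-- or, for a defined index:
COLLECT STATISTICS ON employees INDEX (last_name);
HELP STATISTICS employees;   -- verify what has been collected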
3.2 Join Index
3.2.1 Single table Join Index
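A single-table join index stores selected columns of one table hashed on a different primary index, so joins on that column avoid redistributing the base table. A minimal sketch with hypothetical tables:

-- orders has its PI on order_id; this join index re-hashes the needed
-- columns on customer_id, so joins to customer need no redistribution.
CREATE JOIN INDEX orders_by_cust AS
SELECT customer_id, order_id, order_total
FROM orders
PRIMARY INDEX (customer_id);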
3.2.2 Aggregate Join Index
3.2.3 Sparse Index
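A sparse join index is a join index with a WHERE clause, so only the qualifying rows are stored and maintained (illustrative):

-- Only open orders are kept in the subtable, keeping it small and cheap to maintain.
CREATE JOIN INDEX open_orders_ji AS
SELECT order_id, customer_id, order_total
FROM orders
WHERE order_status = 'O'
PRIMARY INDEX (order_id);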
3.2.4 Global (Join) Index
3.2.5 Join Index performance
3.2.5.1 Collecting Statistics
3.2.5.2 Cost considerations
3.2.5.3 Load utilities
3.3 Partitioned Primary Index
In a table with a partitioned primary index (PPI), rows are (a minimal example follows this list):
- Distributed to the AMPs by the hash of the primary index columns
- Partitioned on some set of columns on each AMP
- Ordered by the hash of the primary index columns within each partition
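A minimal sketch of a range-partitioned table and a query that permits partition elimination (hypothetical table):

CREATE TABLE sales
( sale_id   INTEGER NOT NULL,
  sale_date DATE NOT NULL,
  amount    DECIMAL(10,2) )
PRIMARY INDEX (sale_id)
PARTITION BY RANGE_N (sale_date BETWEEN DATE '2004-01-01'
                                AND     DATE '2004-12-31'
                      EACH INTERVAL '1' MONTH);

-- Only the March partition is scanned on each AMP:
SELECT SUM(amount)
FROM sales
WHERE sale_date BETWEEN DATE '2004-03-01' AND DATE '2004-03-31';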
Benefits of a PPI include:
- Automatic optimization occurs for queries that specify a restrictive condition on the partitioning column.
- Uses partition elimination to improve the efficiency of range searches when, for example, the table is partitioned on a range of dates.
- Only the rows of the partitions qualified by a query need to be accessed, avoiding full-table scans.
- Provides an access path to the rows in the base table while still providing efficient join strategies
- If the same partition is consistently targeted, the part of the table updated may be able to fit largely in cache, significantly boosting performance
How much benefit a PPI delivers depends on:
- The number of partitions defined
- The number of partitions that can be eliminated given the query workloads
- Whether or not the user follows an update strategy that takes advantage of partitioning.
- The PI access disadvantage occurs only when the partitioning column is not part of the PI. In this situation, a query specifying a PI value, but no value for the partitioning column, must look in each partition for that value, instead of positioning directly to the first row for the PI value.
- The direct join disadvantage occurs when another table with the same PI is joined with an equality condition on every PI column. For two non-PPI tables, the rows of the two tables will be ordered the same, and the join can be performed directly. If one of the tables is partitioned, the rows won't be ordered the same, and the task, in effect, becomes a set of sub-joins, one for each partition of the PPI table.
3.4. Index Usage
The EXPLAIN output qualifies the optimizer's row estimates with a confidence level:
- High confidence: restricting conditions exist on index(es) or column(s) that have collected statistics.
- Low confidence: restricting conditions exist on index(es) having no statistics, but estimates can be based upon a sampling of the index(es).
- Low confidence: restricting conditions exist on index(es) or column(s) that have collected statistics but are "AND-ed" together with conditions on non-indexed columns, or "OR-ed" together with other conditions.
- No confidence: conditions outside the above.