
Perception is NOT reality: TASM for everyone

Teradata Active System Management (TASM) is a powerful tool available to you that gives you unprecedented control over your Teradata system. Teradata 14.0 SLES 11 introduces Teradata Integrated Workload Management (TIWM), which, for the first time, provides workload management capabilities to customers without full TASM licensing.

No two customers are the same, and it follows that there cannot be a one-size-fits-all approach to workload management. In this session, we will cover some common, and some not-so-common, resource allocation problems and solutions.

We will demonstrate how to use the intuitive, easy-to-use Viewpoint TASM portlets for all your workload management needs. Use Workload Designer to create Filters, Throttles, and Workloads, and to manage the allocation of resources among them. Learn to use Workload Health and Workload Monitor for live, in-depth analysis of workload performance.

Come and see how to make Workload Management work for you!

Note: This was a 2012 Teradata Partners Conference session.

Teradata Release information: 14.0

Presenters:
Betsy Cherian, Software Engineer – Teradata Corporation
Darrick Sogabe, Product Manager – Teradata Corporation

Audience: Data Warehouse Administrator, Data Warehouse Architect/Designer, Data Warehouse Technical Specialist

Training details
Course Number: 50478
Training Format: Recorded webcast
Price: $195
Credit Hours: 1

Partitioned Primary Indexes (PPI): Getting Started

This session will provide information on the basic concepts of Partitioned Primary Indexes.

What is a partitioned primary index? How do you define a partitioned primary index (PPI)? What are the RANGE_N and CASE_N functions all about? Get the answers and more about this performance-enhancing capability during this presentation on the basics of partitioned primary indexes. This topic is of particular interest to physical database designers and database performance analysts. This session will describe the newer partitioning capabilities such as multi-level partitioning, character PPI enhancements (13.10), ALTER TABLE TO CURRENT (13.10), and the increased number of partitions in Teradata 14.0. This session will provide numerous DDL examples of creating partitioned tables using the newest features.

Key Points:

  • Understand the structure of a partitioned primary index and how it is implemented

  • Learn how to create and alter a PPI using various functions such as RANGE_N

  • Understand the advantages and disadvantages of using a partitioned primary index
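
As a flavor of the DDL the session walks through, here is a minimal sketch of a table partitioned with RANGE_N. The table and column names are illustrative, not taken from the session materials:

CREATE TABLE sales_history
( store_id  INTEGER NOT NULL,
  sale_date DATE NOT NULL,
  amount    DECIMAL(10,2)
)
PRIMARY INDEX (store_id)
PARTITION BY RANGE_N (
  sale_date BETWEEN DATE '2012-01-01' AND DATE '2012-12-31'
            EACH INTERVAL '1' MONTH,   -- one partition per month
  NO RANGE, UNKNOWN);                  -- catch-all partitions

Rows are still hashed by store_id, but within each AMP they are stored in month partitions, so a query constrained on sale_date can eliminate entire partitions from the scan.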

Presenter: Larry Carter, Senior Learning Consultant – Teradata

Training details
Course Number: 38855
Training Format: Recorded webcast
Price: $195
Credit Hours: 1

TASM Implementation Tips and Techniques

Defining workloads in TDWM may qualify as a TASM implementation, but it is just a small piece of the puzzle.  This presentation will look at this and all the other pieces needed to complete the picture.

This presentation will provide you with best practices based on more than 100 TASM implementations.  Learn how to understand workloads and determine the impact of changes.  Examine how to improve tactical query performance and keep your system from AWT saturation and flow control.  If you’re setting up TASM for the first time, or just want to improve the scheme you have now, then this presentation is for you.

Note: This was a 2012 Teradata Partners Conference session.

Presenters:
Greg Hamilton - Teradata Corporation
Srini Gorella - Teradata Corporation
 

 

Audience: Data Warehouse Administrator, Data Warehouse Architect/Designer, Data Warehouse Technical Specialist

Training details
Course Number: 50492
Training Format: Recorded webcast
Price: $195
Credit Hours: 1

Entering and Exiting a TASM State

In earlier postings I’ve described how TASM system events can detect such things as AMP worker task shortages and automatically react by changing workload management settings.  These system events tell TASM to look out for and gracefully react to resource shortages without your direct intervention, by doing things like temporarily adjusting throttle limits downward for the less critical work.

This switchover happens as a result of TASM moving you from one Health Condition to another, and as a result, from one state to another state.  But how does this shift to a new state actually happen?  And under what conditions will you be moved back to the previous state?

Timers that control the movement between states

Before you use system events to move you between states, you will need to familiarize yourself with a few timing parameters within TASM, including the following:

  • System-wide Event Interval – The time between asynchronous checks for event occurrences. It can be set to 5, 10, 30, or 60 seconds, with 60 seconds being the default.
  • Event-specific Qualification Time – The amount of time the event conditions must be sustained in order to trigger the event, ensuring the condition is persistent.
  • Health Condition Minimum Duration – The minimum amount of time that a health condition (which points to the new state) must be maintained, even when the conditions that triggered the change are no longer present. This prevents a constant flip-flop between states.

Entering a State

Let’s assume you have defined an AWT Available event that triggers when you have only two AMP worker tasks available on any five of your 400 AMPs, with a Qualification Time of 180 seconds.   Assume that you have defined the Health Condition associated with the state to have a Minimum Duration of 10 minutes, representing the time that must pass before the system can move back to the original state. 

TASM takes samples of database metrics in real-time, looking to see if any event thresholds have been met.  It performs this sampling at the frequency of the event interval. 

Once a sampling interval discovers the system is at the minimum AMP worker task level defined in the event, a timer is started.  No state change takes place yet.  The timer continues as long as each subsequent sample meets the event’s thresholds.  If a new event sample shows that the event thresholds are no longer being met, the timer will start all over again with the next sample that meets the event’s threshold criteria.

Only when the timer reaches the Qualification Time (180 seconds) will the event be triggered, assuming that all samples along the way have met the event’s threshold; with a 60-second event interval, that means three consecutive qualifying samples.  At that point TASM moves to the new state.

Exiting a State

Returning to the original state follows a somewhat similar pattern.

The Minimum Duration DOES NOT determine how long you will remain in the new state, but rather it establishes the minimum time that TASM is required to keep you in the new state before reverting back to the original state.  

So when will you exit a state?

Event sampling continues at the event-interval frequency the entire time you are in the new state.  Even if the event threshold criteria are no longer being met and available AWTs are detected to be above the threshold, once the move to the new state has taken place, the new state remains in control for the Minimum Duration.

After the Minimum Duration has passed, if event sampling continues to show that the AWT thresholds are being met (you still have at least five AMPs with only two AWTs available), TASM will remain in the new state.  Only with the first sample that fails to meet the event thresholds (once the Minimum Duration has passed) will control move back to the original state.

The bottom line is that you will not return to the original state until the Minimum Duration of the state's Health Condition has passed, and even then you will not be returned if the condition that triggered the event persists.


Using DBQL Object Data for Targeting MLPPI and Columnar Candidacy

Have you ever struggled over whether, or how, to use a new database feature?  In this presentation, we’ll demonstrate how database query log (DBQL) data can help when it comes to determining candidates for MLPPI (Multi-Level Partitioned Primary Index) as well as the new Columnar Partitioning feature of Teradata 14.

This presentation will discuss using DBQL object data for gathering such key information as frequency of use, large scan, and total query counts, as well as CPU and I/O usage.  In addition, the presentation will review techniques for testing and validating conclusions, instilling confidence in your decisions.  Finally, learn how the wealth of information available from DBQL can make it easier for you to benefit from the performance advantages that come with features like MLPPI and Columnar.
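
As a hedged sketch of the kind of DBQL object query this involves (assuming the standard DBC.DBQLObjTbl columns; your filters will depend on how object logging is enabled), frequency-of-use information can be pulled like this:

SELECT ObjectDatabaseName
     , ObjectTableName
     , ObjectColumnName
     , COUNT(DISTINCT QueryID) AS query_count  -- how many logged queries touched the column
     , SUM(FreqofUse)          AS total_uses   -- how often the column was used within them
FROM DBC.DBQLObjTbl
WHERE ObjectType = 'Col'
GROUP BY 1, 2, 3
ORDER BY query_count DESC;

Columns that appear constantly in predicates are natural candidates for partitioning expressions, while tables whose columns are referenced in small, disjoint sets are worth evaluating for columnar partitioning.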

Note: This was a 2012 Teradata Partners Conference session.

Teradata Release information: TD 13 & 14

Presenters:
Barbara Christjohn, Performance COE – Teradata Corporation
David Hensel, Performance COE – Teradata Corporation

Audience: Data Warehouse Administrator, Data Warehouse Architect/Designer, Data Warehouse Technical Specialist

Training details
Course Number: 50495
Training Format: Recorded webcast
Price: $195
Credit Hours: 1

In-Database Analytics and Physical Database Design

It's well-known that the application of database optimization techniques can improve query performance.

However, these techniques are often overlooked when it comes to analytical queries (especially those run in-database), as those queries are already significantly faster than analysts are used to. The application of techniques such as AJIs, MVC, MLPPI, soft RI, and columnar partitioning can lead to further improvements in processing time. This talk will explain those techniques and show how they can be applied to analytical data sets to increase the productivity of the analysts. Examples from the field will be presented detailing how they benefited the analytic process.
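
As one illustrative sketch of the columnar partitioning technique mentioned above (the table and columns are invented for the example), a Teradata 14 column-partitioned table is declared like this:

CREATE TABLE sales_cp
( sale_id   INTEGER,
  store_id  INTEGER,
  sale_date DATE,
  amount    DECIMAL(10,2)
)
NO PRIMARY INDEX
PARTITION BY COLUMN;  -- store each column in its own partition

An analytical query that touches only a couple of these columns then reads just those column partitions rather than whole rows.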

Note: This was a 2012 Teradata Partners Conference session.

Presenter: Paul Segal, PS Consultant – Teradata Corporation

Audience: Data Warehouse Analytical Modeler, Data Warehouse Business Users

Training details
Course Number: 50481
Training Format: Recorded webcast
Price: $195
Credit Hours: 1

Teradata Database Architecture Overview

The Teradata Database system is different from other databases in a number of fundamental ways.

If you want to know what these differences are and how they make it possible for Teradata to deliver unlimited scalability in every dimension, high performance, simple management and all the other requirements of an Active Data Warehouse, then this is the presentation for you. We will discuss how the architecture of Teradata enables you to quickly, efficiently and flexibly deliver value to your business.

Key Points:

  • Teradata's key differentiators
  • What the differences mean to the IT staff
  • How the differences result in value to the business

Note: This was a 2012 Teradata Partners Conference session.

Presenter: Todd Walter, Distinguished Fellow – Teradata Corporation

Audience: Data Warehouse Administrator, Data Warehouse Architect/Designer, Data Warehouse Program Manager

Training details
Course Number: 50480
Training Format: Recorded webcast
Price: $195
Credit Hours: 3

Statistics Collection in Teradata 14.0

Statistics about tables, columns and indexes are a critical component in producing good query plans.

This session will examine the different options that are available when collecting statistics, with an emphasis on the new USING clauses and other enhancements in Teradata 14.0.  Statistics histograms will be detailed, and the important role that SUMMARY statistics play will be emphasized.   The improved statistics extrapolation process and the ability to combine statistics collections into a single table scan will be covered.  The session will wrap up with some general recommendations for statistics collection in Teradata 14.0.

Key Points:

  • Learn the differences between statistics collection approaches
  • Find out how the new Teradata 14.0 enhancements can benefit you
  • Understand the new statistics collection recommendations for 14.0
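
For a flavor of the 14.0 syntax the session covers, here is a minimal, hedged sketch of the new USING clause and of SUMMARY statistics (orders and o_orderdate are illustrative names, not from the session):

COLLECT STATISTICS
  USING SAMPLE 10 PERCENT   -- a 14.0 USING option: sample instead of full scan
  COLUMN (o_orderdate)
ON orders;

COLLECT SUMMARY STATISTICS ON orders;  -- inexpensive table-level counts

The USING options let you control how an individual statistic is collected, while SUMMARY statistics give the optimizer an up-to-date row count that feeds the improved extrapolation process.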

Teradata Release information: Teradata 14.0

Presenter: Carrie Ballinger, Software Engineer – Teradata Corporation

Audience: Data Warehouse Administrator, Data Warehouse Application Specialist, Data Warehouse Architect/Designer, Data Warehouse Technical Specialist

Training details
Course Number: 50654
Training Format: Recorded webcast
Price: $195
Credit Hours: 2

Recommended Dictionary Statistics to Collect in Teradata 14.0

Collecting statistics on data dictionary tables is an excellent way to tune long-running queries that access multi-table dictionary views.  Third-party tools often access the data dictionary several times, primarily using the X views.  SAS, for example, accesses DBC views including IndicesX and TablesX for metadata discovery.  Without statistics, the optimizer may do a poor job of building plans for these complex views, some of which are composed of over 200 lines of code.

In an earlier blog posting I discussed the value of collecting statistics against data dictionary tables, and provided some suggestions on how you can use DBQL to determine which tables and columns to include; go back and review that posting for background.  This posting provides a more comprehensive list of DBC statistics, updated to include those recommended for JDBC.

Note that the syntax I am using is the new create-index-like syntax available in Teradata 14.0.  If you are on a release prior to 14.0, you will need to rewrite the following statements in the traditional COLLECT STATISTICS syntax, as shown in the sketch after the first table below.

Here are the recommendations for DBC statistics collection.  Please add a comment if I have overlooked any other useful ones.

COLLECT STATISTICS
 COLUMN TvmId
 , COLUMN UserId
 , COLUMN DatabaseId
 , COLUMN FieldId
 , COLUMN AccessRight
 , COLUMN GrantorID
 , COLUMN CreateUID
 , COLUMN (UserId ,DatabaseId)
 , COLUMN (TVMId ,DatabaseId)
 , COLUMN (TVMId ,UserId)
 , COLUMN (DatabaseId,AccessRight)
 , COLUMN (TVMId,AccessRight)
 , COLUMN (FieldId,AccessRight)
 , COLUMN (AccessRight,CreateUID)
 , COLUMN (AccessRight,GrantorID)
 , COLUMN (TVMId ,DatabaseId,UserId)
ON DBC.AccessRights;
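
For example, on a release prior to 14.0 the first few of the AccessRights collections above would be written as one traditional statement per statistic (a sketch showing just three of them):

COLLECT STATISTICS ON DBC.AccessRights COLUMN TvmId;
COLLECT STATISTICS ON DBC.AccessRights COLUMN UserId;
COLLECT STATISTICS ON DBC.AccessRights COLUMN (UserId, DatabaseId);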


COLLECT STATISTICS
 COLUMN DatabaseId
 , COLUMN DatabaseName
 , COLUMN DatabaseNameI
 , COLUMN OwnerName
 ,  COLUMN LastAlterUID
 , COLUMN JournalId
 , COLUMN (DatabaseName,LastAlterUID)
ON DBC.Dbase;


COLLECT STATISTICS
 COLUMN LogicalHostId
 , INDEX ( HostName )
ON DBC.Hosts;

COLLECT STATISTICS
 COLUMN OWNERID
 , COLUMN OWNEEID
 , COLUMN (OWNEEID ,OWNERID)
ON DBC.Owners;

COLLECT STATISTICS
 COLUMN ROLEID
 , COLUMN ROLENAMEI
ON DBC.Roles;


COLLECT STATISTICS
INDEX (GranteeId)
ON DBC.RoleGrants;


COLLECT STATISTICS 
COLUMN (TableId)
, COLUMN (FieldId)
, COLUMN (FieldName)
, COLUMN (FieldType)
, COLUMN (DatabaseId)
, COLUMN (CreateUID)
, COLUMN (LastAlterUID)
, COLUMN (UDTName)
, COLUMN (TableId, FieldName)
ON DBC.TVFields;


COLLECT STATISTICS
 COLUMN TVMID
 , COLUMN TVMNAME
 , COLUMN TVMNameI
 , COLUMN DATABASEID
 , COLUMN TABLEKIND
 , COLUMN CREATEUID
 , COLUMN CreatorName
 , COLUMN LASTALTERUID
 , COLUMN CommitOpt
 , COLUMN (DatabaseId, TVMName)
 , COLUMN (DATABASEID ,TVMNAMEI)
ON DBC.TVM;

 
COLLECT STATISTICS
 INDEX (TableId) 
 , COLUMN (FieldId)
 , COLUMN (IndexNumber)
 , COLUMN (IndexType)
 , COLUMN (UniqueFlag)
 , COLUMN (CreateUID)
 , COLUMN (LastAlterUID)
 , COLUMN (TableId, DatabaseId)
 , COLUMN (TableId, FieldId)
 , COLUMN (UniqueFlag, FieldId)
 , COLUMN (UniqueFlag, CreateUID)
 , COLUMN (UniqueFlag, LastAlterUID)
 , COLUMN (TableId, IndexNumber, DatabaseId)
ON DBC.Indexes;


COLLECT STATISTICS
 COLUMN (IndexNumber)
 , COLUMN (StatsType)
ON DBC.StatsTbl;


COLLECT STATISTICS
 COLUMN (ObjectId)    
 , COLUMN (FieldId)
 , COLUMN (IndexNumber)
 , COLUMN (DatabaseId, ObjectId, IndexNumber)
ON DBC.ObjectUsage;


COLLECT STATISTICS
 INDEX (FunctionID )
 , COLUMN DatabaseId
 , COLUMN ( DatabaseId ,FunctionName )
ON DBC.UDFInfo;


COLLECT STATISTICS
 COLUMN (TypeName)
 , COLUMN (TypeKind)
ON DBC.UDTInfo;

 

Ignore ancestor settings: 
0
Apply supersede status to children: 
0

Teradata Express for VMware Player


Download Teradata Express for VMware, a free, fully functional Teradata database that can be up and running on your system in minutes. For TDE 14.0, please read the launch announcement and the user guide. For previous versions, read the general introduction to the new Teradata Express family, or learn how to install and configure Teradata Express for VMware.

There are multiple versions of Teradata Express for VMware available: for Teradata 13.0, 13.10, and 14.0. More information on each package is available on our main Teradata Express page.

Note that in order to run this VM, you'll need to install VMware Player or VMware Server on your system. Also, please note that your system must have 64-bit support. For more details, see how to install and configure Teradata Express for VMware.

For feedback, discussion, and community support, please visit the Cloud Computing forum.

Download sizes, space requirements and MD5 checksums

Package | Version | Initial Disk Space | Download Size (bytes) | MD5 Checksum
Teradata Express 14.0 for VMware (4 GB) | 14.00.00.01 | – | 3,105,165,305 | F8EFE3BBE29F3A3504B19709F791E17A
Teradata Express 14.0 for VMware (40 GB) | 14.00.00.01 | – | 3,236,758,640 | B6C81AA693F8C3FB85CC6781A7487731
Teradata Express 14.0 for VMware (1 TB) | 14.00.00.01 | – | 3,484,921,082 | 2D335814C61457E0A27763F187842612
Teradata Express 13.10 for VMware (1 TB) | 13.10.00.10 | 15 GB | 3,002,848,127 | 04e6cb9742f00fe8df34b56733ade533
Teradata Express 13.10 for VMware (40 GB) | 13.10.00.10 | 10 GB | 2,943,708,647 | ab1409d8511b55448af4271271cc9c46
Teradata Express 13.0 for VMware (1 TB) | 13.00.00.19 | 64 GB | 3,072,446,375 | 91665dd69f43cf479558a606accbc4fb
Teradata Express 13.0 for VMware (40 GB) | 13.00.00.19 | 10 GB | 2,018,812,070 | 5cee084224343a01de4ae3879ada9237
Teradata Express 13.0 for VMware (40 GB, Japanese) | 13.00.00.19 | 10 GB | 2,051,920,372 | 8e024743aeed8e5de2ade0c0cd16fda9
Teradata Express 13.0 for VMware (4 GB) | 13.00.00.19 | 10 GB | 2,002,207,401 | 5a4d8754685e80738e90730a0134db9c
Teradata Tools and Utilities 13.10 Windows Client Install Package | 13.10.00.00 | – | 409,823,322 | 8e2d5b7aaf5ecc43275e9679ad9598b1

 


New Teradata Express 14.0 Versions Available

Short teaser: 
Teradata launches updated versions of Teradata Express 14.0 to the latest patch level.

Teradata is announcing new Teradata Express 14.0 images that update those announced last year. These new images are at patch level 14.00.03.02, which represents the current shipping version of Teradata 14.0. This gives you access to the latest patch updates for Teradata 14.0 for your test and development needs.

Depending upon your needs and the resources available on your PC, three versions of Teradata Express 14.0 are available. Please note that the resources needed for Teradata Express are in addition to those needed by the operating system on your PC:

  • TD Express 14.0 with 4GB of storage. Requires 13 GB of disk space and 2.0 GB of RAM for the Virtual Machine.
  • TD Express 14.0 with 40GB of storage. Requires 18 GB of disk space and 2.5 GB of RAM for the Virtual Machine.
  • TD Express 14.0 with 1TB of storage. Requires 35 GB of disk space and 4.0 GB of RAM for the Virtual Machine.

A 64-bit virtualization-capable PC is required.  VMware provides a utility to check your system for 64-bit support at this link.

Details on the images remain the same as the original TD 14.0 and can be found here:

http://developer.teradata.com/database/articles/teradata-express-14-0-for-vmware-now-available

Instructions for running the images remain the same and can be found here:

http://developer.teradata.com/database/articles/teradata-express-14-0-for-vmware-user-guide

The new images are available at the download page.

Please note that while the Teradata Express family of products is not officially supported, you can talk to other users and get help in the Cloud Computing forum. Note also that Japanese-language instructions for configuring TDE-V are available for download in PDF format.


Big Data - Big Changes

Behind the Big Data hype and buzzword storm are some fundamental additions to the analytic landscape.

What new tools do you need to deal with the new storm of data ripping up your IT infrastructure? What do these new tools do and what are they not good at? How do you choose among these tools to solve the business challenges your company is facing? And how do you tie all the tools together to make them work as an overall analytic ecosystem?

Presenter: Todd Walter, Chief Technologist – Teradata Corporation

Audience: Database Administrator, Designer/Architect

Training details
Course Number: 50782
Training Format: Recorded webcast
Price: $195
Credit Hours: 2

Game Changer: How Columnar Partitioning Altered Migrating to Teradata

You know the story: the athlete coming off the bench is inserted into the game and radically alters the team's play, and the team coasts to victory.  A game changer!  So, too, is Columnar Partitioning, radically altering the process of physical data model development to ease application migration to the Teradata Database.

We all have large, legacy applications in our environments.  You know the characteristics: huge tables with a large number of columns, lots of data, and inadequate or unknown column descriptions, where everyone is loath to remove columns for the sake of some query that might need the content.  The result: an initial physical data model that requires significant alteration once the forklift process is complete.

Teradata Columnar changes the game in migrating to Teradata through swifter model development, better initial performance, and increased user satisfaction, allowing the legacy system to be shut down more quickly.

Teradata Columnar changes the migration game!

Note: This was a 2012 Teradata Partners Conference Presentation.

Presenter: Geoffrey Plummer, PS Principal Consultant – Teradata Corporation

Audience: Database Administrator, Designer/Architect, Application Developer

Training details
Course Number: 50753
Training Format: Recorded webcast
Price: $195
Credit Hours: 1

TD Geo Import/Export Tool 32 bit

Short teaser: 
TD Geo Import/Export tool lets you translate ESRI Shape and KML files to TD format.

TD Geo Import/Export tool lets you translate ESRI Shape and KML files to TD format.


Teradata Express 14.10 for VMware Player User Guide

Short teaser: 
How to configure and run Teradata Express 14.10 for VMware Player

Teradata Express for VMware Player (TDE-V) is a free, fully operational Teradata Virtual Machine with up to one terabyte of storage. Imagine being able to install a fully operational Teradata database on your PC and be running queries in five minutes, easy as 1-2-3.

After installing VMware Server/Player and downloading your choice of VM, this is all it takes:

  1. Install the VM
  2. Start the VM and Teradata
  3. Use Teradata Studio Express to run queries

To help you load data, the new Teradata EZLoader utility is included in the VM.

Depending upon your needs and the resources available on your PC, two versions of Teradata Express 14.10 are available at this time. Please note that the resources needed for Teradata Express are in addition to those needed by the operating system on your PC:

  • TD Express 14.10 with 40GB of storage with SLES 11 and without Viewpoint requires 2.0 GB of RAM for the Virtual Machine and can run on a host machine with 4 GB of RAM.
  • TD Express 14.10 with 1TB of storage with SLES 10 and Viewpoint requires 4.0 GB of RAM for the Virtual Machine and therefore needs a host machine with 8GB of RAM.

A 64-bit virtualization-capable PC is required.  VMware provides a utility to check your system for 64-bit support at this link.

Please note that while the Teradata Express family of products is not officially supported, you can talk to other users and get help in the Cloud Computing forum. Note also that Japanese-language instructions for configuring TDE-V are available for download in PDF format.

Getting Started

The first task is to make sure you have a system capable of handling VMware and VMs. There are plenty of details on the VMware site, but here are some basic requirements that you should be aware of before getting started:

  1. Since the SLES VMs are 64-bit, your CPU must support 64-bit operation.
  2. Your CPU must also support virtualization. Generally there is a BIOS setting which enables this. Google the topic for your particular CPU for more information, but most recent PCs support both 64-bit operation and virtualization.

Once you determine that your system meets the requirements, you can proceed.

It's time to run VMware Player and start Teradata Express.  From the VMware Player Welcome page, choose "Open a Virtual Machine" and click your way through your file directory to the Teradata Express folder, looking for the "TDExpress14.10_sles10.vmx" or "TDExpress14.10_sles11.vmx" file.

 


Figure 1. Loading the TD Express virtual machine in VMware Player

I MOVED IT!

This step is very important.  As you click through the virtual image directories, VMware is looking for the .vmx file, in order to start the image.  Once you find it, double-click or choose the 'Open' option.  VMware will now present you with a dialog box asking if you copied or moved this image.  Be sure to choose  'I MOVED IT'!

  1. VMware Player and VMware Server are both available for free download from the VMware site and both will work. This tutorial describes using VMware Server (hosted on a Windows system). If you have not already done so, make your choice and install VMware on your system.
  2. Disk space is a big consideration. 40GB and 1TB versions of TD Express are available depending on your resources and needs. Typically you don't actually need the full amount of available disk space to install and get started (although having it is advisable). Also, isolating VMs on their own physical disks (if available) can improve performance. Some additional information about disk space is provided below.
  3. Download the appropriate TD Express image from the downloads section.
  4. If you do not already have 7-zip, download it here.
  5. Create a directory on your C:\ drive named "virtual-machines". After unzipping (7-zip) the file you will end up with something like “C:\virtual-machines\TD14..”.
  6. Now you need to add the VM to the VMware inventory. Using VMware Player (see Figure 1 above for reference):
    1. Click on Open A Virtual Machine
    2. Drill down under inventory, highlight the folder and the item in contents (TDExpress14.10_sles11.vmx), and click OK
  7. The VM will show up in the Library under the Home tab and can be started and stopped from there.
  8. Double click on the TD Express image to start.
  9. Log in to the SLES VM with username root and password root.
  10. Wait. The image will take some time to initialize.
  11. Test with bteq. (bteq is the standard Teradata command-line query tool; it can be invoked from the Linux command line in the Gnome window as follows.)
    1. From the shell prompt: TDExpress14.0_Sles10:~ # bteq
    2. When asked for your logon: .logon 127.0.0.1/dbc
    3. When asked for your password: dbc
    4. You should now be in the bteq session (you'll see a message *** Logon successfully completed).
    5. Now, let's execute some SQL. You should see results similar to below except the version will be 14.10.00.02:
      select * from dbcinfo;
      
       *** Query completed. 3 rows found. 2 columns returned.
       *** Total elapsed time was 1 second.
      
      InfoKey                        InfoData
      ------------------------------ --------------------------------------------
      LANGUAGE SUPPORT MODE          Standard
      RELEASE                        14.00.00.01
      VERSION                        14.00.00.01
      
    6. And you can now quit bteq by executing: quit


Figure 2. Testing Teradata using BTEQ

 

If you are running the SLES 11 image without Viewpoint, skip over this next section and go right to Loading Data.

 

Monitoring Teradata Express using Viewpoint

You have the option of running Viewpoint on TD Express 14.10 with the 1TB, SLES 10 image only. To see all you can do with Viewpoint, look here.

To login to Viewpoint follow the steps below.

  1. You need to start Firefox to run Viewpoint. Look for the "Computer" tab in the lower left hand corner of your VMware Player desktop. Under the "Applications" tab you see the icon to start Firefox.



  2. Wait. Firefox can take several minutes to initialize. Even if you have sufficient VRAM allocated, you will have heavy CPU and disk I/O traffic. Just wait, it will come up.


  3. The Login information is pre-populated for you. Just hit "Sign In".
  4. The Set Up screen is already pre-populated for you. Hit "Back to Portal" to get from this screen to the main dashboard.


  5. You are now ready to add portlets to your dashboard.



 

 

Loading Data

On the Teradata version 14.0 VMs the new EZLoader utility is included for fast and easy data loads. Here's an example of loading some data from a comma-separated file (i.e., a CSV). From BTEQ, run the following:

CREATE user vmtest AS password=vmtest perm=524288000 spool=524288000;


CREATE SET TABLE vmtest.test ,
NO FALLBACK , 
NO BEFORE JOURNAL, 
NO AFTER JOURNAL, 
CHECKSUM = DEFAULT ( 
Test_field1 INTEGER,
Test_field2 INTEGER)
PRIMARY INDEX ( Test_field1 );

Create a file called test with contents that look something like this:

1,1
2,2
3,3

Run the load utility:

/opt/teradata/client/13.0/tbuild/bin/tdload -f test -u vmtest -p vmtest -t test

That is it, data loaded!

Running Queries

You can use Teradata Studio Express to run queries against the database. You can learn more about Teradata Studio Express and download versions for various platforms.

Finally, please note that while Teradata Express for VMware is a free, unsupported product, you can talk to other users and ask for help over in the Cloud Computing forum.


New Teradata Express 14.10 Versions Available

Short teaser: 
Teradata launches new Teradata Express 14.10 images including an image with SLES 11.

Teradata is announcing new Teradata Express 14.10 images. These new images are at patch level 14.10.00.02, which represents the current shipping version of Teradata 14.10. This gives you access to the recently released Teradata 14.10 database for your test and development needs.

Important

These images represent a departure from the TD 14.0 and TD 13.10 releases.

  1. The SLES 11 image does not include Viewpoint. Viewpoint is not yet certified with SLES 11, so it could not be included in the virtual container.

  2. The SLES 10 image with Viewpoint requires a host computer with at least 8 GB of physical RAM. Viewpoint functionality has been greatly enhanced and now Data Labs portlets are included.

  3. The SLES 10 image with Viewpoint is larger than previous TD Express images, 4.4GB compressed.

  4. The installation instructions have been slightly modified. Please use the new version for TD 14.10.

New images:

  • TD 14.10/SLES 11/40GB/no Viewpoint: can run on a host machine with 4GB of RAM.
  • TD 14.10/SLES 10/1TB/Viewpoint: requires a host machine with a minimum of 8GB of RAM.

A 64-bit virtualization-capable PC is required.  VMware provides a utility to check your system for 64-bit support at this link.


As before, 7-zip is required to uncompress TD Express images.

 

General details on the images remain the same as the original TD 14.0 and can be found here:

http://developer.teradata.com/database/articles/teradata-express-14-0-for-vmware-now-available

Instructions for running the images are slightly changed and can be found here:

http://developer.teradata.com/database/articles/teradata-express-14-0-for-vmware-user-guide

The new images are available at the download page.

Please note that while the Teradata Express family of products is not officially supported, you can talk to other users and get help in the Cloud Computing forum. Note also that Japanese-language instructions for configuring TDE-V are available for download in PDF format.


Extracting table and column statistics

Attachment: TeradataColumnStatistics.zip (5.38 KB)

Statistics provide valuable information that the optimizer uses to generate an optimal explain plan.  Information about collected statistics can be obtained with the HELP STATISTICS command; however, the displayed data cannot be joined with other tables.  This article provides a Java stored procedure and DDL that extract the output of HELP STATISTICS at the table and column level and capture the data into tables.
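
For reference, the command whose output the procedure captures can be run directly in bteq (RETAIL.ITEM is the sample table used later in this article):

HELP STATISTICS RETAIL.ITEM;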

The captured data in the tables helps with:

  1. Identifying old and stale statistics
  2. Understanding the data demographics in more detail
  3. Joining the data to DBQLObjTbl for further analysis
  4. Generating test data
  5. Exporting the data
  6. Better understanding the explain plan (the Teradata manual “SQL Request and Transaction Processing” explains in detail how the statistics are used)

Environment

The code was built and tested on:

  • TD 13.10
  • Linux 64-bit
  • JDK 1.5

Installation & usage

  1. Three files are provided in TeradataColumnStatistics.zip
    1. TeradataColumnStatistics.java (Java code)
    2. TeradataColumnStatistics.sql (DDL to hold the table and column statistics)
    3. TeradataColumnStatistics.jar (JAR file of the compiled Java code)
  2. Download the above files (always copy or move the TeradataColumnStatistics.jar file in binary mode).
  3. Make sure that the user has privileges to call SQLJ.INSTALL_JAR and CREATE EXTERNAL PROCEDURE.
  4. Create the two tables provided in TeradataColumnStatistics.sql.
  5. Use bteq to execute the commands below:
    CALL SQLJ.INSTALL_JAR('CJ!/tmp/TeradataColumnStatistics.jar', 'TeradataColumnStatistics', 0); 
    
    /** Where /tmp is the directory the TeradataColumnStatistics.jar file resides */ 
    
    REPLACE PROCEDURE TableColumnStatistics
    ( INOUT db VARCHAR(32), INOUT tab VARCHAR(32) )
    LANGUAGE JAVA MODIFIES SQL DATA
    PARAMETER STYLE JAVA
    EXTERNAL NAME 'TeradataColumnStatistics:TeradataColumnStatistics.main';
  6. To execute the JSP in bteq:
    CALL RETAIL.TableColumnStatistics('RETAIL','ITEM'); 
    
    /** RETAIL is the database name and ITEM is the table name */
  7. Check the tables Help_Col_Stats and Help_Tab_Stats for the data contents.

The Java code can also be compiled on the Linux box:

$/opt/teradata/jvm64/jdk5/bin/javac TeradataColumnStatistics.java

$/opt/teradata/jvm64/jdk5/bin/jar -cf TeradataColumnStatistics.jar TeradataColumnStatistics.class

The Teradata manual SQL External Routine Programming (Chapter 5, “Java External Stored Procedures”) discusses building and using Java stored procedures in more detail.


Teradata Database 14.10 Overview

Learn all about TD 14.10!

Learn about the new Optimizer enhancements, Incremental Planning, the anti-skew algorithm Partial Redistribution and Partial Duplication (PRPD), Teradata Intelligent Memory (TIM), Geospatial Indexing, Temporal Enhancements, Table Operators, SQL-H, Unity Source Link, and other exciting capabilities. Teradata Database 14.10 represents the next major step in the evolution of data warehousing excellence.

Teradata Release Information: 14.10

Presenter: Rich Charucki, CTO Teradata Database – Teradata Corporation

Audience: Database Administrator, Application Developer

Training details
Course Number: 50778
Training Format: Recorded webcast
Price: $195
Credit Hours: 2

Row-level security

Short teaser: 
Use Teradata row-level security to restrict row-by-row data access to enhance site security.

Row-Level Security

This feature was introduced in Teradata Database 14.00.

Description

Teradata row-level security (RLS) allows you to restrict data access on a row-by-row basis in accordance with your site security policies. Row-level security policies can be used in addition to the standard GRANT privileges to provide a finer level of access control to table data.

RLS supports both hierarchical and non-hierarchical security schemes:

  • A hierarchical security scheme defines hierarchical security levels. Users with higher security levels automatically have access to rows protected by lower security levels.

    An example would be levels defined from higher to lower security as TOP SECRET, SECRET, CLASSIFIED, and UNCLASSIFIED. A user having a security level of SECRET would be able to access rows that are protected as SECRET, CLASSIFIED, and UNCLASSIFIED, but would not be able to access rows that are TOP SECRET.

  • A non-hierarchical security scheme defines distinct and unrelated security categories, also called compartments. Access granted to one type of protected row does not automatically allow access to rows protected with any other security category.

    An example would be categories for different countries: USA, CAN, UK, GER. A user having a security category of USA would only be able to access rows labeled as USA, and would not be allowed access to rows labeled CAN, UK, or GER that are not also labeled as USA. (Your own custom policies determine whether, for example, a USA user could access a row labeled with both USA and CAN.)

More...

A CONSTRAINT object defines the RLS levels or categories for a specific security policy, and relates the policy to user-defined functions (UDFs) that enforce the actual row-level access controls. You write the UDFs yourself to enforce policies specific to your organization. You will generally write four UDFs, one for each of the four SQL statement types (SELECT, UPDATE, INSERT, and DELETE) that can be restricted at the row level by RLS. When an SQL request is submitted against a row in a table that includes a CONSTRAINT column, these UDFs are run automatically by Teradata Database to determine whether the user is allowed to access the row.

You can assign security levels and categories to a user (or to a PROFILE) from up to six hierarchical and two non-hierarchical CONSTRAINT objects, supporting up to eight different security policies or criteria.

You can assign security levels and categories to the rows in a table by adding a CONSTRAINT column to the table and naming the column for an existing CONSTRAINT object. Populate the CONSTRAINT field of a row with values that identify the RLS access restriction level or category. Tables can include up to five CONSTRAINT columns to limit row access by up to five different sets of security criteria. Users must meet the appropriate criteria for all constraint columns in order to gain access to rows in the table. Views and some index types can also include security constraint columns.

RLS versus View-Based Security Techniques

Another form of row-level security can be implemented using views. In this technique, row-level access rules are part of view definitions. However, this type of row-level security is limited to filtering rows for SELECT operations. Other access to the base tables themselves can bypass the view-imposed row-level security.

Because RLS is applied at the level of base tables rather than views, the security policy is enforced on any method of accessing the base-table data and cannot be bypassed. RLS security policies can be enforced for SELECT, INSERT, UPDATE, and DELETE operations.

Benefits

  • RLS provides row-level access control to sensitive data. In addition to the normal object-level privileges required for data access, the system checks the user clearance for each operation on each row of data protected with RLS.
  • RLS provides mandatory controls on granting access privileges. Normally, owners of database objects automatically have the privileges to grant access to their owned objects to any other user. For objects protected by RLS, only users who are explicitly granted certain privileges can assign security constraints and other RLS access privileges to other users.

Considerations

  • In order to use RLS, you must:
    • Designate a system administrator to manage RLS, and grant them the required privileges.
    • Create the RLS infrastructure to support your system of security classifications. This involves writing security policy UDFs as scalar C language functions, and creating CONSTRAINT objects that reference those UDFs.
    • Create or alter tables to include constraint columns that enforce RLS.
    • Assign database users security levels or categories for RLS.
  • Because RLS involves row-level filtering, it adds processing overhead that results in some performance degradation, as compared to queries that do not involve such filtering. The amount of overhead is a function of several factors including the complexity of the C UDFs written to enforce RLS, which implement your site-specific security policies. The performance impact of using RLS is likely to be comparable to implementing a security policy of similar complexity using views and security table joins. To minimize the performance impact of the RLS policy UDFs, be sure to alter the functions to run in unprotected mode after they have been completely tested while running in the slower protected mode.
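
For example, once a policy UDF such as the SELECT function defined later in this article has been fully tested, it can be switched out of protected mode; a sketch using the standard ALTER FUNCTION syntax:

ALTER FUNCTION SYSLIB.RLS_SelectCompartment EXECUTE NOT PROTECTED;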

Scenario: Implementing Row-Level Security for a Table

This scenario implements a row-level security (RLS) scheme for an employee table in order to restrict access to the row data. Members of a Human Resources (HR) department are assigned security classifications based on their country. These classifications allow individual HR members to access only information for employees from their country.

The security scheme implemented in this scenario is non-hierarchical: HR employees who have access to employee data from one country do not automatically have access to data for employees of any other country.

The scenario demonstrates how to:

  • Grant an administrator (sysdba) privileges required to manage RLS.
  • Create an RLS infrastructure to support a system of security classifications. This involves the following steps:
    • Write UDFs as C code functions that implement security policies for SELECT, INSERT, UPDATE, and DELETE SQL operations.
    • Create a CONSTRAINT object that defines the security categories for this scenario, and points to the security policy UDFs.
    • Assign users different security categories, limiting the rows they can access in RLS-protected tables.
    • Create an employee table protected by RLS by including a constraint column in the table definition.

For More Information about Row-Level Security

For more information about RLS and the SQL used in these examples, see:
Document | Description
SQL Data Definition Language - Detailed Topics, B035-1184 | Provides information on creating CONSTRAINT objects and a discussion of row-level security.
SQL Data Definition Language - Syntax and Examples, B035-1144 | Shows SQL syntax for creating and altering CONSTRAINT objects.
SQL External Routine Programming, B035-1147 | Describes creating user-defined functions in external programming languages.
Security Administration, B035-1100 | Discusses row-level security implementation and contrasts it with other security methods.
Orange Book: Teradata Row Level Security, Document #541-0005614 | Detailed discussion and application examples of implementing and using RLS.
Orange Book: Teradata Database User Defined Function User's Guide, Document #541-0004361 | Detailed discussion and application examples of creating and using UDFs.

Example: Grant Privileges to Manage RLS

The following SQL statements grant privileges that allow user sysdba to:
  • Create FUNCTION (UDF) and CONSTRAINT objects, which are used to implement RLS
  • Assign security categories to users and table rows, which determine the rows those users can access
  • Override aspects of RLS for purposes of system maintenance

.logon teradata_system_name/DBC,password

grant FUNCTION on syslib to sysdba with grant option;
grant CONSTRAINT DEFINITION to sysdba with grant option;

grant CONSTRAINT ASSIGNMENT to sysdba with grant option;

grant OVERRIDE SELECT CONSTRAINT on hr_db to sysdba;
grant OVERRIDE INSERT CONSTRAINT on hr_db to sysdba;
grant OVERRIDE UPDATE CONSTRAINT on hr_db to sysdba;
grant OVERRIDE DELETE CONSTRAINT on hr_db to sysdba;

Example: Create a SELECT Security Policy UDF

The following C code defines a scalar function invoked by the RLS security policy UDF for SELECT statements. It determines which rows of a table a SELECT request is allowed to access.

The SELECT policy function has three arguments, which are passed to the function by Teradata Database when a request involves a table protected by RLS:
  • UserCompartments holds the security category or categories assigned to the user.
  • RowCompartments holds the constraint column value from the row that is being evaluated to determine whether access will be allowed.
  • AccessAllowed is the result of the function that is returned. The return value determines whether access to the row is allowed for the SELECT request.
The function compares the first two argument values to determine whether access to the row is allowed and returns T or F. For our non-hierarchical RLS scheme in this scenario, the function enforces the Mandatory Access Control (MAC) no-read-up policy for SELECT security in the following way:
  • If the security categories of the user sending the request include all categories set in the row's CONSTRAINT column, the SELECT is allowed. (For a hierarchical security scheme, the function would evaluate whether the security level of the user is the same as or greater than the security level of the row.)
  • If the row has no security categories defined (the value of the CONSTRAINT column is NULL), the SELECT is allowed regardless of the user's categories.

// selectcompartment.c

#define SQL_TEXT Latin_Text
#include <sys/types.h>
#include "sqltypes_td.h"

typedef unsigned char byte;

void RLS_SelectCompartment(byte *UserCompartments,
	byte *RowCompartments,
	char *AccessAllowed,
	int *UserCompartments_i,
	int *RowCompartments_i,
	int *AccessAllowed_i)
{
	register int i;

 	*AccessAllowed_i = 0;  // Never returns NULL result

	// Enforce no-read-up policy
       // User's compartments must dominate the row compartments

	// If row compartments are NULL, then SELECT is allowed
	if (*RowCompartments_i == -1) {
		*AccessAllowed = 'T';
		return;
	}

	// Row compartments are not NULL 
	// If user compartments are NULL, then SELECT is not allowed
	if (*UserCompartments_i == -1) {
		*AccessAllowed = 'F';
		return;
	}
	
	// User and row compartments are not NULL
	// Check for user compartments dominating row compartments
	for (i = 0; i < 2; i++) {
		if ((RowCompartments[i] ^ UserCompartments[i]) & RowCompartments[i]) {
			// SELECT is not allowed
			*AccessAllowed = 'F';
			return;
		}
	}
	
	// SELECT is allowed
	*AccessAllowed = 'T';
	return;
}

The following SQL code creates the SELECT policy UDF used for enforcing RLS. When the UDF is created, the C code is compiled.

REPLACE FUNCTION SYSLIB.RLS_SelectCompartment
  (CURRENT_SESSION byte(2), INPUT_ROW byte(2))
  RETURNS char(1)
  SPECIFIC SYSLIB.RLS_SelectCompartment
  LANGUAGE C
  DETERMINISTIC
  NO SQL
  PARAMETER STYLE SQL
  EXTERNAL NAME 'CS!RLS_SelectCompartment!C:\myC\RLS\selectcompartment.c';

The RLS_SelectCompartment UDF will be part of the CONSTRAINT object that is created for this RLS scenario.

Example: Create an INSERT Security Policy UDF

The following C code defines a scalar function invoked by the RLS security policy UDF for INSERT statements. It determines the value that is inserted into the CONSTRAINT column of a table protected with RLS.

The INSERT policy function has two arguments, which are passed to the function by Teradata Database when an INSERT request involves a table protected by RLS:
  • UserCompartments holds the security group or groups of the user.
  • NewRowCompartments holds the value returned by the function that will be assigned to the constraint column of the inserted row.

The function reads the security categories of the user who is inserting the row, and assigns them to the CONSTRAINT column of the newly inserted row. If the user has no assigned security categories, the row is inserted with a CONSTRAINT column value of NULL.

// insertcompartment.c

#define SQL_TEXT Latin_Text
#include <sys/types.h>
#include "sqltypes_td.h"

typedef unsigned char byte;

void RLS_InsertCompartment(byte *UserCompartments,
	byte *NewRowCompartments,
	int *UserCompartments_i,
	int *NewRowCompartments_i)
{
	register int i;

       // Policy is to set row compartments to the user compartments 
       // specified for the session

	// If user/session compartments are NULL, then return NULL
	if (*UserCompartments_i == -1) {
		*NewRowCompartments_i = -1;
		return;
	}

	// User/session compartments are not NULL
	for (i = 0; i < 2; i++)
		NewRowCompartments[i] = UserCompartments[i];
	*NewRowCompartments_i = 0;
	return;
}

Note: The C code defines four arguments rather than two to allow for the possibility of NULL being passed to the UDF. Because this scenario uses a non-hierarchical security strategy, our CONSTRAINT object was defined as data type BYTE(2), which allows the security coding to be passed to the security functions. We also defined the CONSTRAINT to allow NULL values, for cases where users do not have an associated security group designation. Teradata Database passes NULL indicators as integer data types, so the function needs to allow for that by including integer arguments in addition to byte arguments.

The following SQL code creates the INSERT policy UDF used for enforcing RLS. When the UDF is created, the C code is compiled.

REPLACE FUNCTION SYSLIB.RLS_InsertCompartment
  (CURRENT_SESSION byte(2))
  RETURNS byte(2)
  SPECIFIC SYSLIB.RLS_InsertCompartment
  LANGUAGE C
  DETERMINISTIC
  NO SQL
  PARAMETER STYLE SQL
  EXTERNAL NAME 'CS!RLS_InsertCompartment!C:\myC\RLS\insertcompartment.c';

The RLS_InsertCompartment UDF will be part of the CONSTRAINT object that is created for this RLS scenario.

Example: Create an UPDATE Security Policy UDF

The following C code defines a scalar function invoked by the RLS security policy UDF for UPDATE statements. It determines which rows of a table an UPDATE request is allowed to access.

The UPDATE policy function has three arguments, which are passed to the function by Teradata Database when a request involves a table protected by RLS:
  • UserCompartments holds the security category or categories of the user.
  • RowCompartments is the constraint column value from the row that is being evaluated to determine whether access will be allowed.
  • NewRowCompartments is the result of the function that is returned. The constraint column value is replaced with this value in the row that is updated.

The function compares the first two argument values to determine whether access to the row is allowed. For this scenario, the function enforces the Mandatory Access Control (MAC) no-write-down policy for UPDATE security in the following way:

  • If the user security includes one or more RLS categories that are set for the row in the CONSTRAINT column, and does not include any categories that are not set for the row, the UPDATE is allowed.
  • If neither the user nor the row have any RLS categories set (that is, both are NULL), the UPDATE is allowed.
    Note: Although the policy is written to allow users without security labels to update NULL columns, UPDATE operations filter rows using the SELECT policy function before the UPDATE is applied. Because our SELECT function does not allow users with no security to SELECT rows, users having an RLS category of NULL will not be able to UPDATE NULL rows.
  • If the user has an assigned RLS category, but the row RLS category is NULL, the UPDATE is allowed.
  • If the user RLS category is NULL, but the row being evaluated has a non-NULL RLS category set, the UPDATE is not allowed.
  • If UPDATE is allowed, the CONSTRAINT column of the updated row is assigned an RLS category matching that of the user who performed the update.

// updatecompartment.c

#define SQL_TEXT Latin_Text
#include <sys/types.h>
#include "sqltypes_td.h"

typedef unsigned char byte;

void RLS_UpdateCompartment(byte *UserCompartments,
	byte *RowCompartments,
	byte *NewRowCompartments,
	int *UserCompartments_i,
	int *RowCompartments_i,
	int *NewRowCompartments_i)
{
	register int i;

	// Enforce no-write-down policy
	// Row compartments must dominate the user's compartments

	// If user/session and row compartments are NULL, then
	// UPDATE is allowed with NULL
	if (*UserCompartments_i == -1) {
		if (*RowCompartments_i == -1) {
			// Set row compartments to NULL
			*NewRowCompartments_i = -1;
			return;
		} else {
			// User/session compartments are NULL
			// but row compartments are not
			// UPDATE is not allowed
			for (i = 0; i < 2; i++)
				NewRowCompartments[i] = 0;
			*NewRowCompartments_i = 0;
			return;
		}
	}

	// If row compartments are NULL, then
	// UPDATE is allowed with user/session compartments
	if (*RowCompartments_i == -1) {
		for (i = 0; i < 2; i++)
			NewRowCompartments[i] = UserCompartments[i];
		*NewRowCompartments_i = 0;
		return;
	}

	// Check for row compartments dominating user/session
	// compartments (the BYTE(2) constraint makes the
	// compartment arrays 2 bytes long)
	for (i = 0; i < 2; i++) {
		if ((UserCompartments[i] & RowCompartments[i]) != RowCompartments[i]) {
			// UPDATE is not allowed
			for (i = 0; i < 2; i++)
				NewRowCompartments[i] = 0;
			*NewRowCompartments_i = 0;
			return;
		}
	}

	// UPDATE is allowed with inclusive
	// user/session and row compartments
	for (i = 0; i < 2; i++)
		NewRowCompartments[i] = UserCompartments[i] | RowCompartments[i];
	*NewRowCompartments_i = 0;
	return;
}

The following SQL code creates the UPDATE policy UDF used for enforcing RLS. When the UDF is created, the C code is compiled.

REPLACE FUNCTION SYSLIB.RLS_UpdateCompartment
  (CURRENT_SESSION byte(2), INPUT_ROW byte(2))
  RETURNS byte(2)
  SPECIFIC SYSLIB.RLS_UpdateCompartment
  LANGUAGE C
  DETERMINISTIC
  NO SQL
  PARAMETER STYLE SQL
  EXTERNAL NAME 'CS!RLS_UpdateCompartment!C:\myC\RLS\updatecompartment.c';

The RLS_UpdateCompartment UDF will be part of the CONSTRAINT object that is created for this RLS scenario.

Example: Create a DELETE Security Policy UDF

The following C code defines a scalar function invoked by the RLS security policy UDF for DELETE statements. It determines whether the user is allowed to delete a row.

The DELETE policy function has two arguments, which are passed to the function by Teradata Database when an INSERT request involves a table protected by RLS:
  • RowCompartments is the value of the CONSTRAINT column of the row being evaluated for deletion. It codes for the security category or categories of the row.
  • DeleteAllowed is the value returned by the function that indicates whether the DELETE operation on the row is allowed.

In our scenario, the row can be deleted only if it has no security categories (the value of the CONSTRAINT column for the row is NULL).

// deletecompartment.c

#define SQL_TEXT Latin_Text
#include <sys/types.h>
#include "sqltypes_td.h"

typedef unsigned char byte;

void RLS_DeleteCompartment(byte *RowCompartments,
                           char *DeleteAllowed,
                           int  *RowCompartments_i,
                           int  *DeleteAllowed_i)
{
    // Policy is that a row can be deleted only if its compartments are NULL.
    // If the row compartments are NULL, then DELETE is allowed
    if (*RowCompartments_i == -1)
        *DeleteAllowed = 'T';
    else
        // DELETE is not allowed
        *DeleteAllowed = 'F';

    *DeleteAllowed_i = 0;
    return;
}

The following SQL code creates the DELETE policy UDF used for enforcing RLS. When the UDF is created, the C code is compiled.

REPLACE FUNCTION SYSLIB.RLS_DeleteCompartment
  (INPUT_ROW byte(2))
  RETURNS char(1)
  SPECIFIC SYSLIB.RLS_DeleteCompartment
  LANGUAGE C
  DETERMINISTIC
  NO SQL
  PARAMETER STYLE SQL
  EXTERNAL NAME 'CS!RLS_DeleteCompartment!C:\myC\RLS\deletecompartment.c';

The RLS_DeleteCompartment UDF will be part of the CONSTRAINT object that is created for this RLS scenario.

Example: Create a CONSTRAINT Object

A CONSTRAINT object defines the security levels or groups used in an RLS scheme, and embodies the security functions used in that scheme.

The CONSTRAINT is referenced by:
  • a special column of RLS-protected tables, and used to restrict row access according to the security levels or groups defined in the CONSTRAINT object
  • CREATE USER statements to set the security levels or groups of users whose access to table rows will be limited

The following SQL statement creates a CONSTRAINT object that defines four non-hierarchical security groups based on country. It relates those security groups to the security UDFs that are used to enforce RLS. With a data type of BYTE(2), we can define up to 16 security categories, although we will define only four here to keep the example clear. Non-hierarchical RLS schemes allow you to define CONSTRAINT objects with data types up to BYTE(32), which would allow up to 256 different security categories. (Hierarchical security level schemes use CONSTRAINT objects defined with data type SMALLINT, and can support up to 10,000 progressively restrictive security levels.)

CREATE CONSTRAINT RLS_IUDS_Country_Compartment BYTE(2),
       VALUES (US:1, UK:2, CAN:3, GER:4),
       INSERT SYSLIB.RLS_InsertCompartment,
       UPDATE SYSLIB.RLS_UpdateCompartment,
       DELETE SYSLIB.RLS_DeleteCompartment,
       SELECT SYSLIB.RLS_SelectCompartment;

Example: Create a Table Protected With RLS

The following SQL creates a table in the hr_db database that includes a CONSTRAINT column (RLS_IUDS_Country_Compartment) that is used to enforce RLS. Next, the example inserts five rows of data into the table to represent five employees who work in four countries.

DATABASE hr_db;
	
	CREATE TABLE t1, FALLBACK
	(
	employee_id INTEGER,
	employee_name CHARACTER(15),
	employee_location CHARACTER(3),
	RLS_IUDS_Country_Compartment CONSTRAINT
	);

	INSERT INTO t1 VALUES (1, 'John','UK','4000'xb)
	;INSERT INTO t1 VALUES (2, 'Mary','US','8000'xb)
	;INSERT INTO t1 VALUES (3, 'Adam','UK','4000'xb)
	;INSERT INTO t1 VALUES (4, 'Simon','CAN','2000'xb)
	;INSERT INTO t1 VALUES (5, 'Peter','GER','1000'xb)
	;

In the CONSTRAINT object, every security category or level is assigned an associated numeric value. In our scenario, these categories and associated values correspond to a country. The two-byte hex value inserted into the CONSTRAINT column for each employee row represents the security category that a user must have in order to have access to the row. For more information on how these hex values are determined, see the Orange Book: Teradata Row Level Security, Document # 541-0005614.
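
Judging from the values used in this scenario, each category's numeric value selects one bit of the BYTE(2) field, counted from the most significant bit:

US  (value 1) = 1000 0000 0000 0000 = '8000'xb
UK  (value 2) = 0100 0000 0000 0000 = '4000'xb
CAN (value 3) = 0010 0000 0000 0000 = '2000'xb
GER (value 4) = 0001 0000 0000 0000 = '1000'xb

A user assigned several categories holds the OR of the corresponding bits; for example, a user assigned all four categories holds 'F000'xb, as noted later in this article.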

HR database administrative users will be required to have a security group that corresponds to the CONSTRAINT value of these rows in order to perform certain SQL operations on the rows. Administrative users can be assigned to more than one security category to allow them access to data from more than a single country.

Example: Create Users That Have RLS Security Constraints

The following SQL creates three HR department database users. Notice that the CREATE USER statement includes the CONSTRAINT option, which assigns each user to one or more RLS security categories:
  • hr_corp is assigned to security categories that allow access to rows designated as containing information for employees in all four countries.
  • hr_americas is assigned to security categories that allow access only to rows designated as containing information for employees from the US and Canada.
  • hr_europe is assigned to security categories that allow access only to rows designated as containing information for employees from the UK and Germany.
Note: Assigning multiple categories to a user is allowed for non-hierarchical RLS schemes.

The GRANT statements allow the users full access to the objects they create and general DML access to objects in the hr_db database. These statements are not related to RLS.

create user hr_corp from sysdba as
    perm = 0
   ,password = hr_corp
   ,default database = hr_db
   ,account = 'hr_corp'
   ,spool =100000
   ,no fallback
   ,no before journal
   ,no after journal
   ,CONSTRAINT = RLS_IUDS_Country_Compartment (US, UK, CAN, GER)
;

grant all on hr_corp to hr_corp with grant option;
grant SELECT, INSERT, UPDATE, DELETE on hr_db to hr_corp with grant option;

create user hr_americas from hr_corp as
    perm = 0
   ,password = hr_americas
   ,default database = hr_db
   ,account = 'hr_americas'
   ,spool = 100000
   ,no fallback
   ,no before journal
   ,no after journal
   ,CONSTRAINT = RLS_IUDS_Country_Compartment (US, CAN)
;

grant all on hr_americas to hr_americas;
grant SELECT, UPDATE on hr_db to hr_americas;

create user hr_europe from hr_corp as
    perm = 0
   ,password = hr_europe
   ,default database = hr_db
   ,account = 'hr_europe'
   ,spool = 100000
   ,no fallback
   ,no before journal
   ,no after journal
   ,CONSTRAINT = RLS_IUDS_Country_Compartment (UK, GER)
;

grant all on hr_europe to hr_europe;
grant SELECT, UPDATE on hr_db to hr_europe;

Examples: Accessing Rows in a Table Protected With RLS

User hr_corp can view all rows labeled as having information restricted to US, UK, CAN, or GER RLS categories. Those are the rows that contain the binary representation of US, UK, CAN, or GER categories in the CONSTRAINT column (RLS_IUDS_Country_Compartment).

.logon iedbc/hr_corp,

SELECT * from t1;

employee_id employee_name   employee_location RLS_IUDS_Country_Compartment
----------- --------------- ----------------- ----------------------------
          5 Peter           GER               1000
          4 Simon           CAN               2000
          3 Adam            UK                4000
          1 John            UK                4000
          2 Mary            US                8000
+---------+---------+---------+---------+---------+---------+---------+----

User hr_americas can only view rows labeled for US, CAN, or both.

.logon iedbc/hr_americas,

SELECT * from t1;

employee_id employee_name   employee_location RLS_IUDS_Country_Compartment
----------- --------------- ----------------- ----------------------------
          2 Mary            US                8000
          4 Simon           CAN               2000
+---------+---------+---------+---------+---------+---------+---------+----

User hr_europe can view only rows labeled for UK, GER, or both.

.logon iedbc/hr_europe,

SELECT * from t1;

employee_id employee_name   employee_location RLS_IUDS_Country_Compartment
----------- --------------- ----------------- ----------------------------
          5 Peter           GER               1000
          3 Adam            UK                4000
          1 John            UK                4000
+---------+---------+---------+---------+---------+---------+---------+----

User hr_corp changes its RLS session constraint from US, UK, CAN, and GER to US only. The INSERT policy uses the session constraint to set the CONSTRAINT column value for the new row. If this were not done, the CONSTRAINT column would be loaded with 'F000'xb instead of '8000'xb, where 'F000'xb represents the combined binary values of US, UK, CAN, and GER, and only sysdba and hr_corp could view the new row.

.logon iedbc/hr_corp,

SET SESSION CONSTRAINT = RLS_IUDS_Country_Compartment (US);

 *** Set SESSION accepted. 
+---------+---------+---------+---------+---------+---------+---------+----

INSERT INTO t1 VALUES (6, 'Bob','US',);

 *** Insert completed. One row added. 
+---------+---------+---------+---------+---------+---------+---------+----

Because the session constraint was set to US, in this session hr_corp can view only rows with US in the security compartment.

SELECT * from t1;

employee_id employee_name   employee_location RLS_IUDS_Country_Compartment
----------- --------------- ----------------- ----------------------------
          2 Mary            US                8000
          6 Bob             US                8000
+---------+---------+---------+---------+---------+---------+---------+----

With a new logon hr_corp can again view rows with any of the country categories in the CONSTRAINT column.

.logon iedbc/hr_corp,

SELECT * from t1;

employee_id employee_name   employee_location RLS_IUDS_Country_Compartment
----------- --------------- ----------------- ----------------------------
          5 Peter           GER               1000
          4 Simon           CAN               2000
          6 Bob             US                8000
          3 Adam            UK                4000
          1 John            UK                4000
          2 Mary            US                8000
+---------+---------+---------+---------+---------+---------+---------+----

.logon iedbc/hr_americas,

SELECT * from t1;

employee_id employee_name   employee_location RLS_IUDS_Country_Compartment
----------- --------------- ----------------- ----------------------------
          2 Mary            US                8000
          4 Simon           CAN               2000
          6 Bob             US                8000
+---------+---------+---------+---------+---------+---------+---------+----

.logon iedbc/hr_europe,

SELECT * from t1;

employee_id employee_name   employee_location RLS_IUDS_Country_Compartment
----------- --------------- ----------------- ----------------------------
          5 Peter           GER               1000
          3 Adam            UK                4000
          1 John            UK                4000
+---------+---------+---------+---------+---------+---------+---------+----

hr_americas has UPDATE rights on t1 and can change any column except the CONSTRAINT column.

.logon iedbc/hr_americas,

UPDATE t1 SET employee_name = 'Robert' WHERE employee_id = 6;

 *** Update completed. One row changed. 
+---------+---------+---------+---------+---------+---------+---------+----

sysdba has all OVERRIDE CONSTRAINT rights and can set the CONSTRAINT column to NULL. The row is now visible to all users and can be deleted.

.logon iedbc/sysdba,

UPDATE hr_db.t1 SET RLS_IUDS_Country_Compartment = NULL WHERE employee_id =
 6;

 *** Update completed. One row changed. 
+---------+---------+---------+---------+---------+---------+---------+----

.logon iedbc/hr_europe,

SELECT * from t1;

employee_id employee_name   employee_location RLS_IUDS_Country_Compartment
----------- --------------- ----------------- ----------------------------
          5 Peter           GER               1000
          6 Robert          US                ?
          3 Adam            UK                4000
          1 John            UK                4000
+---------+---------+---------+---------+---------+---------+---------+----

hr_corp has DELETE rights on t1 and can DELETE the row now that the CONSTRAINT column is NULL.

.logon iedbc/hr_corp,

DELETE t1 WHERE employee_id = 6;

 *** Delete completed. One row removed. 
 *** Total elapsed time was 1 second.

+---------+---------+---------+---------+---------+---------+---------+----

SELECT * from t1;

employee_id employee_name   employee_location RLS_IUDS_Country_Compartment
----------- --------------- ----------------- ----------------------------
          5 Peter           GER               1000
          4 Simon           CAN               2000
          3 Adam            UK                4000
          1 John            UK                4000
          2 Mary            US                8000

+---------+---------+---------+---------+---------+---------+---------+----

Unity Source Link Scenario

Short teaser: 
Use Unity Source Link to import and query foreign data from Teradata Database.

Overview of Unity Source Link

Unity Source Link (USL) allows reads and joins from an external or foreign database to Teradata Database without requiring data replication in the Teradata data warehouse. USL can be used to:
  • create heterogeneous joins for reporting
  • insert data selected from a foreign database into Teradata Database
  • migrate data to Teradata Database
Note: USL currently supports only importing data from Oracle databases. Complex data types, such as large object (LOB), user-defined type (UDT), period, and interval are not supported.
The figure below illustrates the system hardware and software components.
  • The USL portal server is a dedicated Teradata Managed Server (TMS) with Linux SLES 11. Your system can have more than one USL portal server, with a single portal on each server.
  • USL portal software connects Teradata Database to one or more foreign databases.

[Figure: USL portal server connecting Teradata Database to one or more foreign databases]
The USL software components run on the USL portal server and on Teradata Database nodes:
  • The USL portal software on the TMS server acts as a gateway between Teradata Database and the foreign database. The USL portal software uses ODBC to access the foreign database. It handles data conversion between the foreign data type and the Teradata data type.
  • External Access Handler (EAH) software resides in Teradata Database in a dedicated database named DBCForeign. EAH has three sub-components: stored procedures, the import table operator, and tables.
    • Stored procedures are the interface for administrative and end users to Teradata Database.
    • The tables track foreign database connections in the DBCForeign database.
    • The import table operator runs in a Java virtual machine on each node in Teradata Database and communicates through a socket connection with the USL portal server.

Unity Source Link Workflow

Unity Source Link is implemented and used by two distinct roles.
  1. The Unity Source Link administrative user:
    1. Calls the CreatePortal stored procedure to define a portal in DBCForeign, including the name, IP address, port, and connection timeout information (a sketch of this call and the CreateServer call follows this list).

      The portal entry is stored in the DBCForeign.Portals table on Teradata Database, and views are created for accessing statistics, requests, and session information for the portal.

    2. Calls the CreateServer stored procedure to define servers in DBCForeign, including the server name, user name, password, portal name, server data source name (DSN), and data source.

      The server row is stored in the DBCForeign.Servers table on Teradata Database.

    3. Configures access in the DBCForeign.ServerUserMapping table on Teradata Database to specify which users or roles have access to which server, and informs users which server names are mapped for their access.
    4. Grants privileges for EAH stored procedures:
      • USL administrators need all privileges on DBCForeign
      • USL users need EXECUTE PROCEDURE privileges for CreateLink and DeleteLink stored procedures
  2. The Unity Source Link end user:
    1. Calls the CreateLink stored procedure to define views to foreign tables so they can be queried from any SQL client, such as SQLA and BTEQ.

      An entry for tracking each view is created in the DBCForeign.Links table on Teradata Database with "LinkName@ServerName" as the view name.

      The view is defined as a query from the import table operator. Table operators accept a table or table expression as input and generate a table as output. They are a type of user-defined function (UDF). Table operators provide a simplified in-database MapReduce-style programming model. Table operators allow output column definitions to be dynamically determined at runtime. In USL, the output column definitions are determined based on the column definitions of the foreign table or the result table of the foreign table expression at query time.

    2. Queries the views created by the stored procedure.
      • When querying the views, any select, filtering, ordering, grouping, operations, or functions that are part of the view definition are executed by the foreign SQL database engine.
      • When querying the views, any select, filtering, ordering, grouping, operations, functions, or joins that are outside of the view definition are executed by the Teradata Database engine.
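
The two administrative calls in steps 1 and 2 might look like the following minimal sketch. The parameter order is an assumption based on the descriptions above rather than the exact stored procedure signatures (see the Unity Source Link User Guide, B035-2601); 'Portal1' is a hypothetical portal name, and the other values are taken from the SHOW VIEW output later in this scenario.

-- Hypothetical parameter order; outStatus returns a success or error message
CALL DBCForeign.CreatePortal('Portal1', '153.64.25.217', 5005, 20, outStatus);
CALL DBCForeign.CreateServer('Connect2Oracle', 'datatypes_user1', '$tdwallet',
                             'Portal1', 'OracleWP', 'oracle', outStatus);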

USL Performance Considerations

Because USL links heterogeneous systems, overall performance is affected by processes taking place on the USL Portal Server, on Teradata Database, and on the foreign database.

USL Portal Server and Performance

In addition to connecting Teradata Database to a foreign database, one of the main functions of the USL Portal Server is to process any required data conversions between the two systems. This means Teradata Database is relieved of the burden of data conversions, aiding in overall system performance when processing data from the foreign database.

Database Execution

USL uses two different SQL database engines, making it important to understand which portions of a query are executed by which database engine.

The CreateLink stored procedure creates a view named "LinkName@ServerName"; subsequent query execution depends on the view definition:
  • When querying the views, any select, filtering, ordering, grouping, operations, or functions that are part of the view definition are executed by the foreign SQL database engine.
  • When querying the views, any select, filtering, ordering, grouping, operations, functions, or joins that are outside of the view definition are executed by the Teradata Database engine.
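
For example, with the link view created later in this scenario, the aggregation that CreateLink placed inside the view definition is executed by Oracle, while a filter wrapped around the view is executed by Teradata Database. A sketch:

-- Executed by the foreign (Oracle) engine, because it is part of the view
-- definition: SELECT Cust_ID, MAX(Activity_DT) dt, SUM(Tot_Dollars) td
--             FROM Datatypes.c2 GROUP BY Cust_ID
-- Executed by Teradata Database, because it is outside the view definition:
SELECT * FROM DBCForeign."c2Link2@Connect2Oracle" WHERE Cust_ID = 3;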

Identifying High-Value Customers Across Multiple Sites

This scenario illustrates a heterogeneous join.

A company owns multiple casino properties. One casino (C1) is in Las Vegas and the other (C2) is in Atlantic City. A customer walks into the Las Vegas property and uses their loyalty card. Management wants to know the customer's status, but the Las Vegas property holds activity data only for visits to its own casino, so it must query the activity data from the other casino properties as well as its own. The problem is that the properties are somewhat independent and use different database vendors: Las Vegas uses Teradata Database and Atlantic City uses a foreign database. In this simple scenario, management is looking for the customer's last activity date and total winnings across all past visits.

For More Information about Unity Source Link

For more information about Unity Source Link and about the SQL used in these examples, see:

  • Unity Source Link User Guide, B035-2601: Describes setup, administration, and use of Unity Source Link.
  • SQL Functions, Operators, Expressions, and Predicates, B035-1145: Describes the UNION operator and other functions, operators, expressions, and predicates.
  • SQL Data Manipulation Language, B035-1146: Describes the CALL statement, used for invoking stored procedures, and other DML statements.
  • SQL Data Definition Language - Syntax and Examples, B035-1144: Describes the SHOW VIEW statement and other DDL statements.

Scenario Assumptions

This scenario assumes the following tasks have already been performed by the system and Unity Source Link administrators:

  • The External Access Handler software has been installed on Teradata Database.
  • Appropriate access rights to DBCForeign have been granted to the Unity Source Link Administrator, and the USL administrator has granted appropriate access rights to users who require access to USL.
  • The Unity Source Link Portal Server service has been started and configured.
  • A portal that defines the Teradata Database connection to the foreign database has been created. The portal must use the same port number as that used by the service (the default is port 5678).
  • ODBC data sources have been created on the Unity Source Link Portal Server (a Teradata Managed Server) to access the foreign database. In our scenario, the foreign database is Oracle.
  • Teradata Wallet has been configured to store and protect the Oracle Database passwords in an encrypted form.

Our scenario will show how a Teradata Database user with appropriate privileges can access the data in both the Teradata and Oracle databases to answer questions about casino guests.

Assume the following customer data exists in the Teradata database table c1 for the Las Vegas casino:

Cust_ID  Activity_DT  Tot_Dollars
      1  2012-10-15       1000.00
      1  2012-10-16       -500.00
      1  2012-11-01       2000.00
      3  2012-12-30      -1000.00

Assume the following customer data exists in the Oracle database table Datatypes.c2 for the Atlantic City casino:

Cust_ID  Activity_DT  Tot_Dollars
      2  2012-10-15       1000.00
      2  2012-11-15       -500.00
      3  2013-01-01       5000.00
      3  2013-01-02      -2000.00
      3  2012-01-01       -500.00

Links to Foreign Databases

USL allows you to create links to foreign databases. These links are views in DBCForeign that you can query the same way that you query tables and views in Teradata Database.

Creating a Link

When a link is created, a row is added to the DBCForeign.Links table. This row contains the characteristics of the link. During the execution of the CreateLink stored procedure, a view is created in the form of "LinkName@ServerName".
  1. Use the CreateLink stored procedure to define a link in Teradata Database to a table in a foreign database; this creates a view for the table in the DBCForeign database. Example:
    CALL DBCForeign.CreateLink('LinkName', 'TableName', 'ServerName', 'SelectList', 'Clause', outStatus);
    Option       Description
    LinkName     Name of the link. The view that is created takes the form
                 "LinkName@ServerName".
    TableName    Name of the foreign table to be accessed, up to 128
                 characters.
    ServerName   Name of a server that was previously created using the
                 CreateServer stored procedure. A list of the created servers
                 can be obtained by querying the DBCForeign.Servers table.
    SelectList   [Optional] List of foreign table columns to be selected. If
                 no select list is specified, all foreign table columns are
                 selected. The list is not limited to a simple list of
                 columns, but the SQL must be well-formed. It can contain:
                 • scalar operations, such as c1 + 10
                 • aggregate functions
                 • non-aggregate columns, which must appear in a GROUP BY
                   clause
                 • user-defined functions
    Clause       [Optional] SQL clause that specifies filtering, grouping, or
                 ordering operations to be performed on the table. If no
                 clause is specified, none of these operations are performed.
                 The WHERE keyword is not needed if the clause contains only a
                 WHERE condition. Examples:
                 • c3 > 10
                 • c3 > 10 GROUP BY c1
                 • GROUP BY c1
    outStatus    Returns a message indicating that the stored procedure
                 executed successfully or that an error occurred.
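
Because each link is tracked as a row in the DBCForeign.Links table, existing links can be listed with an ordinary query (a sketch; the table's column layout is not shown in this article):

SELECT * FROM DBCForeign.Links;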

Example: Creating a Link

  1. Log on to Teradata Database and use a SQL client to create the link.
    CALL DBCForeign.CreateLink('c2Link2', 'Datatypes.c2', 
    'Connect2Oracle', 'Cust_ID, MAX(Activity_DT) dt, 
    SUM(Tot_Dollars) td', 'GROUP BY Cust_ID', st);
    
    *** Procedure has been executed. 
    *** Total elapsed time was 3 seconds.
    
    
    outStatus
    ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    [USL7109] OK! User =  DATATYPES_USER1, Select List = "Cust_ID, MAX(Activity_DT) dt, SUM(Tot_Dollars) td", Clause = "GROUP BY Cust_ID", View = "c2Link2@Connect2Oracle"
    
    SHOW VIEW DBCForeign."C2Link2@Connect2Oracle";
    
    *** Text of DDL statement returned. 
     *** Total elapsed time was 1 second.
    
    ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    CREATE VIEW "DBCForeign"."c2Link2@Connect2Oracle" AS (SELECT * FROM "DBCForeign"."Import" (
    USING 
    I_TABLE('Datatypes.c2') 
    I_SQL('SELECT Cust_ID, MAX(Activity_DT) dt, SUM(Tot_Dollars) td 
    FROM Datatypes.c2 
    GROUP BY Cust_ID')
    I_PORTAL_IP('153.64.25.217')
    I_PORT( 5005) 
    I_DATASOURCE('oracle') 
    I_DSN('OracleWP') 
    I_TIMEOUT(    20) 
    I_UID('datatypes_user1') 
    I_PWD('$tdwallet')) AS I1)
    

Example: Querying a View

Query the activity data from other casino properties together with local data for the customers' last activity date and their total past winnings.
  1. Run a query.
    SELECT Cust_ID (TITLE 'Customer'),
           MAX(dt) (TITLE 'Last Visit'),
           SUM(td) (TITLE 'Winnings')
    FROM (SELECT Cust_ID, MAX(Activity_DT) dt, SUM(Tot_Dollars) td
          FROM c1
          GROUP BY 1
          UNION
          SELECT Cust_ID, dt, td
          FROM DBCForeign."c2Link2@Connect2Oracle"
         ) c
    WHERE Cust_ID = 3
    GROUP BY 1;
    
    The query returns one row:
      Customer  Last Visit  Winnings
    -----------  ----------  --------
              3    13/01/02   1500.00
    

Discussion of the USL Example Query

Notice that the query selects information from both the Teradata database table holding the Las Vegas customer information and the view of the Oracle database table data holding the Atlantic City customer information. The necessary data transformation is taken care of automatically by USL so the information can be combined and manipulated seamlessly by Teradata Database.

The SELECT statement immediately above the UNION collects the Las Vegas data from table c1. That SELECT statement aggregates the table data per customer to show the most recent casino visit date and the sum of all the money won or lost by the customer up to and including that date. By itself, that SELECT would return this data:

Cust_ID        dt         td
      3  12/12/30   -1000.00
      1  12/11/01    2500.00

The SELECT statement immediately after the UNION queries the view of the Oracle data. This view was created in Teradata Database. Because the link statement specified similar aggregations on the Oracle data, this SELECT statement returns data organized in columns similar to those above:

Cust_ID        dt         td
      2  12/11/15     500.00
      3  13/01/02    2500.00

The result returned by the Oracle database engine might have looked like this before USL transformed it to match the data formats used in Teradata Database:

CUST_ID  DT                     TD
      2  2012-11-15 00:00:00    500
      3  2013-01-02 00:00:00   2500

The complete query that combines data from both the Las Vegas and Atlantic City casinos uses a WHERE clause to limit the data to the customer of interest, and sums the customer's winnings from both casinos to present the overall total amount won or lost by that customer at both locations:

Customer  Last Visit  Winnings
       3    13/01/02   1500.00