Database Administrators Asked by yellephen on December 28, 2020
I have a logging table that contains references to objects in other tables. I want to create a view that adds some additional data to the logging table by looking back at those objects.
For example, I have
Logging Table
| DateUpdated | Table  | UID         | Column    | oldValue |
|-------------|--------|-------------|-----------|----------|
| 01/01/2019  | Person | 555-666-777 | FirstName | Joe      |
| 02/01/2019  | Jobs   | 444-777-654 | Machine   | Forklift |
What I want to do in the view is add a column, currentValue, that looks up [Column] from [Table] where uid = [UID]. So the currentValue column needs to be loaded with a dynamic query against any table. Is there a way to do this? I can explain it further if necessary.
Precisely because this approach lends itself to complicated rules for building queries dynamically, I use a different approach.
I see you are logging changes to tables named person and jobs. Below is a summary of my approach for implementing a standard logging methodology.
Create an audit table for each table where you want to log changes. You could do this using a schema specifically for auditing (e.g. audit.person and audit.jobs), or you could use a standard table name suffix or prefix (e.g. person_audit, jobs_audit).
Next, I use some standard columns in the audit table:
id
transaction_type
transaction_time
transaction_user
Here, "id" represents the primary key ID of the table you are auditing, so in fact you might have multiple columns if the table you are auditing has a composite primary key. I typically use exactly the same column name as used in the data table. For example, if your person table's primary key column were named person_id, then in the audit table I would specify person_id as the first column. (It is tempting to standardise all audit tables to have a primary key named id, but that will fail when you have to audit a table with a composite primary key.)
The transaction_type field will log one of three values: insert, update, or delete, matching the statement that fired the trigger.
The transaction_time field will have a timestamp or datetime or similar; basically you want the date and time down to the millisecond. It will, of course, be logging the time of the data change.
If your RDBMS and application combination supports it, include a transaction_user field to log the application username (or DB username, if appropriate) of the user who carried out the data modification.
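Putting those standard columns together, one audit table might look like the following. This is a minimal sketch only: the MySQL syntax, the column types, and the choice to audit just FirstName are all assumptions, not part of the original answer.

```sql
-- Hypothetical sketch (MySQL syntax assumed), using the _audit suffix convention.
CREATE TABLE person_audit (
    person_id        CHAR(11)     NOT NULL,  -- same name (and type) as person's primary key
    transaction_type VARCHAR(10)  NOT NULL,  -- 'insert' | 'update' | 'delete'
    transaction_time DATETIME(3)  NOT NULL DEFAULT CURRENT_TIMESTAMP(3),  -- millisecond precision
    transaction_user VARCHAR(64)  NULL,      -- application or DB user, if available
    FirstName        VARCHAR(100) NULL       -- one column per audited field
);
```

A jobs_audit table would follow the same pattern, with jobs' primary key column and its own audited fields.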
Next, on the table to be audited, add some triggers: after insert, after update, and before delete.
You might tweak the before/after clauses depending on your needs, but the intention here is clear: we will record the relevant values on each of those events. The delete trigger is before delete, not after, so that you still have access to the data values being deleted.
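As a concrete illustration, the triggers might be sketched as below. MySQL syntax is assumed, the trigger names are hypothetical, and CURRENT_USER() records the DB user rather than an application user; the update trigger also shows one possible "only audit when an audited column changed" criterion.

```sql
-- Hypothetical sketch (MySQL syntax assumed).
DELIMITER //
CREATE TRIGGER person_audit_ins AFTER INSERT ON person FOR EACH ROW
BEGIN
    INSERT INTO person_audit (person_id, transaction_type, transaction_user, FirstName)
    VALUES (NEW.person_id, 'insert', CURRENT_USER(), NEW.FirstName);
END//
CREATE TRIGGER person_audit_upd AFTER UPDATE ON person FOR EACH ROW
BEGIN
    -- Only write an audit row when an audited column actually changed
    -- (<=> is MySQL's null-safe equality operator).
    IF NOT (NEW.FirstName <=> OLD.FirstName) THEN
        INSERT INTO person_audit (person_id, transaction_type, transaction_user, FirstName)
        VALUES (NEW.person_id, 'update', CURRENT_USER(), NEW.FirstName);
    END IF;
END//
CREATE TRIGGER person_audit_del BEFORE DELETE ON person FOR EACH ROW
BEGIN
    -- BEFORE DELETE, so the OLD values are still available to record.
    INSERT INTO person_audit (person_id, transaction_type, transaction_user, FirstName)
    VALUES (OLD.person_id, 'delete', CURRENT_USER(), OLD.FirstName);
END//
DELIMITER ;
```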
Within each trigger we can do a few things, actually.
In other words, using this method it becomes really simple, trivial, and standard to add a new field to the auditing mechanism - simply follow the steps above.
What needs to be clear, here, to the business is that not every change is being audited - only changes that meet whatever criteria you stipulate within the triggers. So for example, when users modify fields that are not being audited, of course there is no audit record created. This means, for example, you can't look at your audit and say "the data record was last modified at XYZ time" - it may well have been modified after your last audit record, but not audited.
Some of the benefits of this approach include: no dynamic SQL is needed to report on changes; every audit table follows the same standard structure; and extending the audit to a new column or table is trivial.
In addition to the above, consider adding a few standard columns to your data tables themselves, if these are useful to you: date_created and date_last_modified.
These are also maintained by appropriate triggers on the data table (after insert and after update), or even by a column default in the column definition (in particular for date_created). I find them useful, but not as informative as a full audit. They will, however, show you whether a record was modified after the last fully audited change (i.e. if date_last_modified > max(audit_table.transaction_time), then something changed on the table that was not audited).
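That last check can be written as a query. This is a sketch under the same assumptions as before (MySQL-style SQL, a person table with date_last_modified, and a person_audit table keyed by person_id):

```sql
-- Hypothetical sketch: find rows changed after their last audited change.
SELECT p.person_id,
       p.date_last_modified,
       MAX(a.transaction_time) AS last_audited_change
FROM person p
LEFT JOIN person_audit a ON a.person_id = p.person_id
GROUP BY p.person_id, p.date_last_modified
HAVING p.date_last_modified > MAX(a.transaction_time);
```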
Feel free to ask questions and I can keep improving the answer to address specifics.
Finally, to implement your specific use case, it depends on how you'd like to report this now. For example, your example layout seems to be showing "old value" and "current value" on one row. Of course, if a row has changed multiple times, you will see all the "old values" on separate rows, and each row will show you the current value (ie. same value on each row in that field).
To do this, simply write a query to union two selects:
select
    ap.transaction_time as DateUpdated,
    'Person' as tableName, -- hard-coded
    ap.person_id as UID,
    'FirstName' as columnName, -- hard-coded
    ap.FirstName as oldValue,
    (select p.FirstName from person p where p.person_id = ap.person_id) as currentValue
from audit.person ap
union all
-- and a similar select for the jobs table
order by DateUpdated -- if your RDBMS supports referencing aliases in the order by clause; otherwise, if Microsoft, use a CTE
This would indeed be a tedious query as you continue to add specific tables and columns.
The alternative is to work with the new design:
select * from audit.person order by transaction_time;
Much simpler and the last row contains the current values, of course.
Answered by youcantryreachingme on December 28, 2020
Possible solution (just for a laugh).
SELECT L.*,
CASE L.Table WHEN 'Person'
THEN CASE L.Column WHEN 'FirstName'
THEN P.FirstName
WHEN 'LastName'
THEN P.LastName
-- .....
END
WHEN 'Jobs'
THEN CASE L.Column WHEN 'Machine'
THEN J.Machine
WHEN 'Education'
THEN J.Education
-- .....
END
-- .....
END ActualValue
FROM LoggingTable L
LEFT JOIN Person P ON L.Table = 'Person' AND L.UID = P.UID
LEFT JOIN Jobs J ON L.Table = 'Jobs' AND L.UID = J.UID
-- .....
Answered by Akina on December 28, 2020