
One of the most used services in the DFP API is the LineItemService. Many of you are already utilizing the Line_Item table in the PublisherQueryLanguageService to create match tables on fields like Status or ExternalId, but with newer API versions, more and more fields are available as columns. Did you know that as of v201411 the Line_Item table includes a column for Targeting? With so many line item fields now accessible through PQL, the Line_Item table might be a viable replacement for your read operations.

What's the advantage? Faster response times. As an example, I pulled 5,000 line items from a network using both the LineItemService and the Line_Item PQL Table, printing page offsets as the results arrived. Take a look at the results:

* Actual response times may vary. Line item fields only available in participating PQL Tables.

Using the PublisherQueryLanguageService shaved off 17 seconds for a respectable speed increase of 15%.

However, if your application doesn't need some of the heavier fields, you'll see a much bigger gain. Check out what happens when we leave out Targeting:

The sparse selection offered by the PublisherQueryLanguageService means our data size is smaller, cutting the total time by a whopping 45%!

If you're looking for a performance boost in your LineItem read operations, give the Line_Item table a try. We've got example code in each of our client libraries to get you started. If you have any questions, don't hesitate to reach out to us on our API forums.

In the recent DFP API releases, we announced the addition of more tables to the PublisherQueryLanguageService, starting with Line_Item and Ad_Unit. These tables are an alternative to retrieving entities from their respective services’ get***ByStatement methods. They allow you to retrieve sparse entities containing only the fields you’re interested in. For example, the following select statement retrieves the first page of only the ID and name of line items that are missing creatives.
SELECT Id, Name from Line_Item WHERE IsMissingCreatives = true LIMIT 500 OFFSET 0
In this blog post, we’ll go over some situations where this feature can be utilized to speed up entity retrieval times from hours to minutes.
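Since these selects are paged with LIMIT and OFFSET, one convenient pattern is to generate the statement for each page from a base query. The helper below is purely illustrative (not part of any client library), showing how the statements for successive pages line up:

```python
PAGE_SIZE = 500  # the recommended page size


def paged_statements(base_query, num_pages):
    """Build paged PQL statements from a base SELECT using LIMIT/OFFSET.

    base_query: a PQL SELECT without LIMIT/OFFSET, e.g.
        "SELECT Id, Name FROM Line_Item WHERE IsMissingCreatives = true"
    """
    return ["%s LIMIT %d OFFSET %d" % (base_query, PAGE_SIZE, page * PAGE_SIZE)
            for page in range(num_pages)]
```

Each generated statement retrieves one page of sparse entities; you stop requesting pages once a page comes back empty.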

Entity synchronization


The first major use case that benefits from these new tables is entity synchronization. For example, if you’re synchronizing the line items on your network into a local database, you’re most likely using LineItemService.getLineItemsByStatement, and hopefully taking advantage of the LineItem.lastModifiedDateTime field to retrieve only the line items that have changed since the last time you synchronized. But even with lastModifiedDateTime, this synchronization can still take a while, depending on how many line items you have on your network and how complex their targeting is. If you don’t need to synchronize all the fields in your line item objects, you may be able to use the Line_Item PQL table to perform this synchronization instead.

If you do need to synchronize fields not yet available in the Line_Item table, such as targeting, you can still take advantage of this table for computed fields that don’t affect lastModifiedDateTime, such as LineItem.status. What you can do is synchronize your line items as usual with getLineItemsByStatement filtering on lastModifiedDateTime. Then update your local statuses with selected line item statuses from the Line_Item table (a very quick process):
SELECT Id, Status from Line_Item LIMIT 500 OFFSET 0
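To sketch that second step, assume the selected (Id, Status) rows have already been fetched into tuples; merging them into a local cache might look like the following (the function and data shapes here are illustrative, not part of the API):

```python
def apply_status_updates(local_line_items, status_rows):
    """Overwrite cached statuses with fresh values from the Line_Item table.

    local_line_items: dict mapping line item ID -> dict of cached fields.
    status_rows: iterable of (id, status) tuples from the sparse PQL select.
    """
    for line_item_id, status in status_rows:
        item = local_line_items.get(line_item_id)
        if item is not None:
            # Status is computed server-side and doesn't bump
            # lastModifiedDateTime, so it must be refreshed separately.
            item["status"] = status
    return local_line_items
```

Because each (Id, Status) row is tiny, paging through even a large network this way is quick compared with pulling full line item objects.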

Match tables for reports


Local copies of line item information can also be used as match tables to construct more detailed reports. Sometimes, you may want more information in your reports than what is currently available as a dimensionAttribute. For example, if you run a report by line item ID, you may also want other line item information like isMissingCreatives to show in the report. Because LineItem.isMissingCreatives is unavailable as a DimensionAttribute, you can create a local match table containing line item IDs and additional columns to be included in the report. Then you can merge this match table with the report by the line item ID to obtain a report with those additional columns.

For example, let’s say you run a report with the following configuration:
Dimension.LINE_ITEM_ID
DimensionAttribute.LINE_ITEM_COST_TYPE
Column.AD_SERVER_IMPRESSIONS
The report in CSV_DUMP format looks something like this:
Dimension.LINE_ITEM_ID, DimensionAttribute.LINE_ITEM_COST_TYPE,
    Column.AD_SERVER_IMPRESSIONS
1234567, CPM, 206
1234568, CPD, 45
1234569, CPD, 4
To also include LineItem.isMissingCreatives in the report, you would fetch a match table and save it (as a CSV file for example) by retrieving ID and isMissingCreatives from the Line_Item table.
SELECT Id, IsMissingCreatives from Line_Item LIMIT 500 OFFSET 0
Full examples of how to fetch match tables are available in all our client libraries. For instance, Python’s is here. Then using a script or a spreadsheet program, merge the match table with the report to produce something like this:
Dimension.LINE_ITEM_ID, DimensionAttribute.LINE_ITEM_COST_TYPE,
    Column.AD_SERVER_IMPRESSIONS, LineItem.isMissingCreatives
1234567, CPM, 206, true
1234568, CPD, 45, false
1234569, CPD, 4, false
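As a minimal sketch of that merge step, using only the standard library (the CSV layout and column names mirror the example above; the helper itself is hypothetical):

```python
import csv
import io


def merge_report_with_match_table(report_csv, match_table):
    """Append a match-table column to each report row, joining on line item ID.

    report_csv: CSV text whose first column is Dimension.LINE_ITEM_ID.
    match_table: dict mapping line item ID (int) -> isMissingCreatives value.
    """
    reader = csv.reader(io.StringIO(report_csv))
    header = next(reader)
    merged = [header + ["LineItem.isMissingCreatives"]]
    for row in reader:
        line_item_id = int(row[0])
        merged.append(row + [str(match_table.get(line_item_id, "")).lower()])
    return merged
```

A spreadsheet VLOOKUP over the saved match-table CSV achieves the same join without any code.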
If you have any questions on these new PQL tables, or suggestions on what PQL tables you want in the next release, please let us know on the API forum, or on our Google+ Developers page.

As your networks grow, so does their data in the DFP servers. While previously making requests for tens of line items, you now find yourself requesting tens of thousands of line items. Of course, with more data comes more responsibility - your requests are now taking longer and the response sizes have increased accordingly. You notice that some of your requests are now returning with 'ServerError.SERVER_ERROR.' Things might seem hopeless, but don’t panic...

Many of these problems can be solved with pagination! What does this mean from a developer's perspective? In many implementations, we've noticed that applications make requests with empty filter statements to calls like these:
getCreativesByStatement(" ")
getLineItemsByStatement(" ")
getOrdersByStatement(" ")
getCustomTargetingValuesByStatement(" ")
These requests do not limit the size of the returned result set, so the application is asking for the data of every single object belonging to that service. When you’re talking about thousands of line items, each with its own distinct custom targeting, the amount of data will often cause the request to fail.

The fix? When creating PQL statements to query for DFP objects, you’ll find that our client libraries all use a recommended page size (500), limiting your queries to smaller batches with the 'LIMIT' keyword, which should feel familiar to anyone who has used SQL. After the first page has returned successfully, you can then use the 'OFFSET' keyword to retrieve each subsequent page until your request returns nothing. If the calls still take a long time to return a page, or still fail at this point, try a smaller page size.
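The paging loop itself is simple. Here is a rough sketch in which fetch_page stands in for any getXByStatement call (it is a placeholder, not a real client-library function):

```python
PAGE_SIZE = 500  # the recommended page size used by the client libraries


def fetch_all(fetch_page, page_size=PAGE_SIZE):
    """Page through results with LIMIT/OFFSET until a short (or empty) page."""
    results = []
    offset = 0
    while True:
        page = fetch_page(limit=page_size, offset=offset)
        results.extend(page)
        if len(page) < page_size:
            # A short page means there is nothing left to fetch.
            return results
        offset += page_size
```

Because each page is fetched independently, a failure partway through only costs you the current page: retry from the last successful offset rather than from scratch.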

If you use pagination to retrieve data, you not only get the benefit of increased reliability, but also protect yourself should something go wrong. Instead of retrying the entire request from the start again, you can simply pick up where you left off.

To see how to implement pagination logic, you can find examples in each of our client libraries:
Ruby
Java
PHP
Python
Dotnet
If you have any questions on using pagination with your queries, post them on the API forum or Google+ Developers page.

 - , DFP API Team

Today we are launching v201308 of the DFP API, which brings many new and exciting features and a glimpse of the API’s future. This release improves report stability, offers a brand new way to fetch line items through a Publisher Query Language table, adds the ability to create first-party audience segments, and lets you see contending line items in forecasts. A detailed list of these features and what’s changed can be found on our release notes page.

Reporting

First off, we heard you loud and clear - reports are very important to you and when a report fails for no apparent reason, it’s incredibly frustrating. Starting today, we are taking major steps towards our goal to fix this. You’ll now notice that large reports, which would otherwise time out or fail with a 502 HTTP status code while fetching the download URL, will now spend more time preparing the report in the runReportJob stage. Some reports may still be too large to run, but any report that runs in the UI will now work via the API as well. We've also made this change behind the scenes, so you’ll start seeing improvements right away without having to switch to v201308. While we know there is still more work to be done, we hope this is a clear sign that we take this issue seriously and are working hard to improve it.

In addition to stability improvements, in v201308, we are launching two highly requested features: targeted criteria reporting and ActiveView (a.k.a. viewability metrics) columns. These features are not available in all networks yet, but you or your third-party will be able to use them as soon as they are rolled out, if your network is eligible.

Publisher Query Language

We are launching two major PQL features today - the LIKE keyword and the Line_Item table, both of which will be made available in all versions.

The LIKE keyword allows you to do wildcard matching for fields. For example, if you pass the filter statement “WHERE Name LIKE 'my order%'” to the OrderService.getOrdersByStatement method, it will match all orders that have a name beginning with ‘my order’ (like ‘my order 1’, ‘my order 2’ and ‘my orders’).
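The '%' wildcard matches any run of characters, much as in SQL. As a rough model of that matching behavior (purely illustrative; this is not how the server implements it, and it ignores other PQL details):

```python
import fnmatch


def pql_like(value, pattern):
    """Approximate PQL LIKE matching: '%' matches any sequence of characters."""
    return fnmatch.fnmatchcase(value, pattern.replace("%", "*"))
```

So a filter like "WHERE Name LIKE 'my order%'" behaves as a prefix match on the order name.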

The other exciting feature of this release is the experimental Line_Item table. With this new table, you’ll be able to select only the fields you want for line items using the PublisherQueryLanguageService. For example:

SELECT Id, Name FROM Line_Item WHERE IsMissingCreatives = TRUE LIMIT 1000 OFFSET 0

This allows for extremely efficient synchronization; tasks that would take hours with the LineItemService will now take minutes. We think this will be a great fit for pulling “match tables” and we’ll have a follow-up blog post soon about how to do this. Although we are launching this with a limited set of fields, we have made it a priority to add more in upcoming releases and we’d love to hear your feedback on our forum or Ads Developer Google+ page. If you want to get started playing with these new features now, you can always visit the dfp-playground. Try using the Publisher Query Language section with a query like “SELECT Id, Name FROM Line_Item WHERE name LIKE 'Line Item #%' LIMIT 100”.

Last, but not least

Starting in v201308, we are adding support for creating first party audience segments with the AudienceSegmentService as well as retrieving contending line items with the ForecastService. We know the latter has been a long time coming, so we are looking forward to any feedback.

As always, if you have any suggestions or questions about the new version, feel free to drop us a line on our Ads Developer Google+ page.


 - , DFP API Team