Because SQL within FileMaker requires learning at least a subset of the SQL-92 standard, a developer using it is moving beyond FileMaker's native list of functions.

This page is currently a placeholder for the discussion of adopting a best practice for using SQL within FileMaker calculations.

Further discussion about this topic can be found on the Google Groups discussion about ExecuteSQL.

The currently proposed snippet uses a Substitute-based method (similar to printf):

 

Let ( [ ~sql = "
	SELECT t1.~field
	FROM ~table1 t1
	JOIN ~table2 t2
	ON t1.~field = t2.~field
	WHERE ~field LIKE '%~value%'
	AND ~field=?
	ORDER BY ~field";
 
	$sqlQuery = Substitute ( ~sql ;
		[ "~table1" ; SQLTableName ( Table1::fieldName ) ];
		[ "~table2" ; SQLTableName ( Table2::fieldName ) ];
		[ "~field" ; SQLFieldName ( Table1::fieldName ) ];
		[ "~value" ; Table::field ]
	);
 
	$sqlResult = ExecuteSQL ( $sqlQuery ; "" ; "" ;
    	$value;
    	$value[2];
    	$value[$n]
	)
];
	//Substitute ( $sqlQuery ; "	" ; "" ) &¶& // sql preview
	If ( $sqlResult = "?" ;
		Let ( ~debug = False ; If ( ~debug ; SQLDebugResult ( $sqlResult ) ; False ) );
		$sqlResult
	)
)

 

To see any SQL errors, the above calculation must be evaluated within FileMaker's Data Viewer. The SQLDebugResult custom function (below) is very simple and is something of a hidden debugging "feature".

 

SQLDebugResult (custom function)
If ( sql = "?" ; "" ; sql )

// If the ExecuteSQL result passed in is an error ("?"), return empty so the error will be returned within the Data Viewer.
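
One way to use it, as suggested in the comments below, is to wrap ExecuteSQL directly (a minimal sketch; $sqlQuery and $value stand in for an actual query and argument):

SQLDebugResult (
	ExecuteSQL ( $sqlQuery ; "" ; "" ; $value )
)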

14 Comments

  1. Based on a few Twitter exchanges with Jeremy and having seen a session at PauseOnError 2012 in Portland, I was going to propose the use of FileMakerSqlSugar by Brian Schick.

  2. Things started out like this...

    Jeremy Bante #FileMaker module: SQL Sugar by @brian_schick - write clean #FileMaker SQL. modularfilemaker.org/2013/03/filema #mFM

    Matt Petrowsky @jbante I'm glad you found this. I worked with Brian to convert the code to fmstandards. Was going to propose it as our SQL standard.

    Jeremy Bante @filemakermag I'm happy to promote things people are doing at ModularFileMaker.org, but I have mixed feelings about SQL sugar for myself.

    Matt Petrowsky @jbante Elaborate? On the filemakerstandards.org?

    Jeremy Bante @filemakermag The @ ( objOrVal ; params ) function tries to do more than I'm usually comfortable with a single custom function doing.

    Jeremy Bante @filemakermag I find the resulting calcs less readable. It gives developers too many ways to do the same thing. Clarity over brevity!

    Jeremy Bante @filemakermag It's a 3rd syntax to learn on top of the 2 we already have with SQL calcs, and it doesn't spare me any cognitive load for it.

    Jeremy Bante @filemakermag It inserts more abstraction between my calc and the result without sparing me any details I can safely neglect to think about.

    Jeremy Bante @filemakermag In short, the SQL sugar functions feel more like syntactic saccharin to me.

    Matt Petrowsky @jbante Hmnn, it offers formatting clarity, field/table abstraction, collision avoidance and more at a small cost of the 3rd syntax.

    Jeremy Bante @filemakermag The same can be had with good conventions and more transparent functions without a 3rd syntax that I think is not so cheap

    Matt Petrowsky @jbante Let's discuss it on the site.

    Jeremy Bante @filemakermag It's awfully close to the one-function-for-everything anti-pattern parodied in its demo file.

  3. To elaborate on some of my thoughts on the SQL sugar functions:

    • Now that we have a FileMaker-native ExecuteSQL function, it's a huge part of our lives, worth making easier. The first challenge to that is writing calculations for SQL queries that don't break when we modify table and field names, preserving for ourselves a uniquely liberating feature of FileMaker. Before the @ function(s), several different developers independently arrived at what amounts to the same solution: what I'll write here as SQLTable and SQLField functions — each splitting a field reference into its constituent parts and quoting either always or as necessary (a minimal sketch of one such function appears after this list). This is functionality that I, and everyone else, want to have.
    • Within that limited subset of @'s functionality, it does more than most other solutions to make not only the calculation pretty (which is not necessarily the same thing as more readable), but also the SQL. That's nice. It only wraps table and field names in quotes if it has to. On the one hand, prettier resulting SQL queries can be more comfortable to debug. On the other, how much more comfortable is it to debug, and is that worth the extra processing overhead? I'm not leaning either direction on this point, but it's worth addressing.
    • Readable code expresses the intention of the author while minimizing effort on the part of the reader. Organization and brevity can be instrumental for readability, but they are not readability. One of the goals of @ is improving the readability of SQL queries and the calculations that generate them, but I think it overshoots the target in multiple ways:
    1. The function syntax is anything but self-descriptive. Clever developers who see @ in use enough times will slowly piece together what it does, but it's generally easier to understand the authors' intentions *after* spending the effort to read the documentation, which is a heavy read as far as custom functions go. While good documentation is important, the code would be more readable if reading the documentation were less necessary (because that would minimize reader effort).
    2. The most extreme overloading I've ever seen in a custom function means that the same function can mean very different things each time it's used, increasing cognitive overhead. One rule of thumb we have from the UNIX programming world is "do one thing, and do it well." @ may work well, but it takes one look at "do one thing" and sprints as fast as it can in the opposite direction. This is not what function and operator overloading were created for in other languages — overloading is for doing the same thing with different inputs, not for doing different things with the same inputs or different things with different inputs.
    3. The parameter synonyms offer several ways to express the same intention, so that developers who wish to master reading the function have to learn more syntax than there is actual functionality provided by the function. Not wanting to impose a particular style on developers has a certain poetry to it, but this website wouldn't exist if there weren't value in developers agreeing to do the same things the same way. If another developer wants to do things differently, they can use a different set of functions more consistent with their taste.
    4. Method chaining substantially aids readability in languages that support it. FileMaker calculation syntax does not. Other programming paradigms are a useful source of inspiration, but some patterns only work well because they have language support. It's possible to do object-oriented programming in C, but most of us would rather do it in C++ or Objective-C. Method chaining is great, but I don't think it translates well to FileMaker calculations.
    5. I think the parameter syntax of @ frequently obscures the intention of the author by creating a SQL-generation syntax that is so different from the desired result SQL syntax. Author intentions can usually be expressed more clearly by concatenating string literals than by using the @ function. Compare @ ( <tbl::fld> ; ". , _" ) to ", " & SQLField ( <tbl::fld> ). Compare @ ( <tbl::fld> ; ". lower() trim()") to "LOWER ( TRIM ( " & SQLField ( <tbl::fld> ) & " ) )". Rather than passing the syntax of a third domain-specific language to a function and trusting it to do the right thing, we can write exactly what we want in exactly the order we want it in the result. I might be more accepting if the @ function spared developers from needing to know SQL, but it doesn't even begin to do that.
    • The "@" syntax is likely to be confusing to developers used to seeing variables in SQL, even if FileMaker doesn't support them. The @ character looks like SQL syntax, rather than FileMaker syntax. It's useful for different things to look different, especially when they're placed so close together. I recommend that if we do recommend a best-practice set of functions for building SQL queries, we prefix them with "SQL".
    • The @.sqlQuery wrapper function is a nice piece of programming, but I think it should be advertised as a debugging tool rather than something to use in production code. It does a lot, which will take computing cycles, all to produce a result that should be functionally identical to the input.
    • The examples demonstrating the value of using @ all show cleanly-formatted SQL results. The same well-organized results are equally easy to achieve with good conventions and a handful of more humble custom functions, without substantially more thought on the part of the author. Even if it is more work, I think readability is more important than writability, and I've expressed my readability concerns above already. If only there were a website that could be used to promote readable programming conventions in FileMaker...
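
    For reference, here is a minimal sketch of one way such a function could be written (illustrative only, and always-quoting rather than selectively quoting; it is not any particular author's actual implementation):

    // SQLField ( field ), illustrative sketch only: split the fully qualified
    // field reference into its parts and return a quoted "table"."field" name
    // that survives renaming. A SQLTable version would return just the quoted table part.
    Let ( [
    	~reference = Substitute ( GetFieldName ( field ) ; "::" ; ¶ ) ;
    	~table = GetValue ( ~reference ; 1 ) ;
    	~field = GetValue ( ~reference ; 2 )
    ] ;
    	Quote ( ~table ) & "." & Quote ( ~field )
    )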
  4. Wow, well thought out and articulated! I think #5 and the last bullet points are some of the biggest arguments against adopting the overhead.

    I do like that the suite of @ functions can be copy/pasted in one shot, but as you mention I can now see that it does come with additional cognitive overhead.

    Here is the format that I've been using myself.

    Let ( [
    	~table = Quote ( TableName ( Table::field ) );
    	~field = Quote ( FieldName ( Table::field ) );
    	~sql = "SELECT DISTINCT ~field FROM ~table WHERE ~field=?";
    	~sql = Substitute ( ~sql ;
    		[ "~table" ; ~table ];
    		[ "~field" ; ~field ]
    	)
    ];
    	ExecuteSQL ( ~sql ; "" ; "" ; Table::field )
    )

     

    It is straight SQL with table/field abstraction. What are you using these days? Can we borrow some things from what Brian has done and promote using List() in order to write the SQL statement itself? Or maybe just clean up what is above to be even more refined?

  5. The calculation structure I use with the ExecuteSQL function is still evolving. I'm trying to find good balance between:

    • Minimizing the cognitive effort needed to determine what SQL query to expect from a calculation generating it.
    • Keeping a clear distinction between SQL syntax and FileMaker calculation syntax.

    I think we're close.

    Everyone agrees our SQL-generating calculations should be robust to field name changes. (I'm reluctant to describe that as "abstraction," but I don't have a better word at the moment.) I agree that a custom function is the right tool to facilitate that; reading the name-change-safe calculations generated by SeedCode's SQL Explorer reinforced that point beyond any doubt. GetValue ( Substitute ( ... ) ) is simple enough, but not very helpful for expressing our intention. One thing the @ function adds to this operation is selective quoting, which makes the SQL result cleaner-looking at the cost of a little function complexity and processing overhead — I think that tradeoff is at least worth trying out for a while.

    Another thing I think most folks accept, though some have acknowledged it more consciously than others, is that it's probably a better idea to just learn SQL than to try to write functions that save us from having to. Any suite of tools we build that attempts to reproduce the same flexibility will have just as much of a learning curve as SQL itself, but then we'd be another step removed, both in computation and in understanding, from what we're actually doing. Earlier versions of my SQLField[name] and SQLTable[name] functions accepted arguments for field and table aliases, but I took them out because I decided I'm better off writing

    "SELECT a." & SQLField ( table::field ) & " FROM " & SQLTable ( table::field ) & " AS a"

    than

    "SELECT " & SQLField ( table::field ; "a" ; "" ) & " FROM " & SQLTable ( table::field ; "a" )

    Your format reminds me a lot of Dan Smith's ExecuteSQL Gist. His format is great for debugging in a script because it sets different values of interest to local variables that can be inspected in the data viewer. It's a handy pattern, but it feels to me perhaps a bit much to standardize on.

    One thing the Gist does differently from your example that's worth bringing up is that the table and field name encapsulation/indirection happens in-line with the concatenated SQL. The Gist puts the target FileMaker field in-line with where it will wind up in the SQL. Your example indirects the table and field names separately, and substitutes them in. Your example seems like a milder expression of problem 5 I complained about above, and there's more processing overhead, but the initial expression of the SQL query is awfully clean-looking. I wonder if that advantage holds as the query gets more complicated?

    I'm personally ambivalent about the use of List vs. concatenating pilcrows. I think the more interesting aspect of the calculation formatting from Brian's demos of the @ function is that the whitespace formatting of the calculation mirrors the formatting of the resulting SQL query. In Brian's demos, that's by convention; but I think it might be better if it were a necessary consequence of how the calculation is constructed:

    "SELECT"
    & "¶    " & SQLField ( Address::city )
    & "¶,   " & SQLField ( Address::state )
    & "¶,   " & SQLField ( Address::postCode )
    & "¶FROM " & SQLTable ( Address::postCode )
    & "¶WHERE "
    & "¶    " & SQLField ( Address::id ) & " = ?"

    or

    List (
    "SELECT" ;
    "    " & SQLField ( Address::city ) ;
    ",   " & SQLField ( Address::state ) ;
    ",   " & SQLField ( Address::postCode ) ;
    "FROM " & SQLTable ( Address::postCode ) ;
    "WHERE " ;
    "    " & SQLField ( Address::id ) & " = ?"
    )

    List appears to have an edge in this regard (at the cost of another indent level), but I'm happy to read either.

  6. I'm not as ambivalent about the use of List vs. pilcrows. Personally, when I see pilcrows I like to interpret them as serving some functional purpose where a hard return is actually needed. In this case they are only used for cleaner-reading SQL, and they become visually distracting to me.

    Regarding the use of the SQL prefix for SQLField and SQLTable: this is something I think we should forgo. More than a few times when using Evaluate() I've needed either the field or table name from a field reference - hence I've opted for simply TableName and FieldName. Unless the functions added something SQL-specific, they seemed more general to me.

    I also like the fact that I can optionally Quote() the values. I don't think we should assume that we should protect all fields and tables. The best solution to this may be a function like SQLResult(), which could perform collision validation and also do error handling. It would be a wrapper around ExecuteSQL and could also provide the valuable (but unknown) aspect of SQL Debugging.

    Let ( [
    	~table = Quote ( TableName ( Table::field ) );
    	~field = Quote ( FieldName ( Table::field ) );
    	
    	~sql = List (
    			"SELECT ~field" ;
    			"FROM ~table" ;
    			"WHERE ~field=?" ;
    		);
    		
    	$sqlQuery = Substitute (
    		~sql ;
    		[ "~table" ; ~table ];
    		[ "~field" ; ~field ]
    	)
    ];
    	SQLResult ( $sqlQuery ; "" ; "" ; Table::field)
    )

    One thing I like about the above code is that I can see "what" I'm dealing with at the top - before having to interpret the ~sql. Very much like variable declaration. I don't have to dive into the SQL to assess the fields and tables used. Another advantage of the Substitute method is that you can use VERY descriptive variable names, which should make the SQL easier to read. Another advantage is that SQL does not use the tilde, so this blends with our standards. This is pretty much your standard printf format. And yet another bonus is that with Substitute you get some pre-processing before injection (if you ever need it).

    I do admit to the hassle of having to declare the variable and then add the substitution. This seems like a good chunk of redundancy when coding it.

    Another item I've added is the $sqlQuery variable. It would be nice, from solution to solution, to know that a reserved variable might be there to actually preview the authoritative query being run. By doing so, we could keep the SQL and execution within one script step - not that it's a priority.

    As mentioned, using SQLResult() would be a wrapper around ExecuteSQL which could handle errors, logging and debugging as desired.
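
    As a rough sketch of what that wrapper could look like (assumed for illustration, not a finished function; note that a custom function can only pass through a fixed number of dynamic arguments, a limitation that comes up in the reply below):

    // SQLResult ( sqlQuery ; arguments ), illustrative sketch only: run the query,
    // then convert FileMaker's "?" error marker into empty text so the error can
    // be trapped by a script or surfaced in the Data Viewer.
    Let ( [
    	~result = ExecuteSQL ( sqlQuery ; "" ; "" ; arguments )
    ] ;
    	If ( ~result = "?" ; "" ; ~result )
    )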

    1. You make a compelling case for List, at least to sway me from ambivalence.

      I have also used non-SQL-specific functions for getting and isolating field and table names. But the example of @ demonstrates how there may still be value in keeping SQL-specific versions: I agree that we shouldn't assume that all field and table names must be protected, but when a name does need protecting to make valid SQL syntax, we should protect it, and a SQL-specific function that decides when to quote names for us might fruitfully spare us from having to think about it. I think this is much easier to accomplish before the names are part of the query than after. If we do adopt a function that sanitizes the query after it's constructed, I think the structure might work better like this:

      SQLDebugResult ( ExecuteSQL ( SQLSanitize ( $sqlQuery ) ; "" ; "" ; $value ; $value[2] ; $value[$n] ) )

      This way, each function has only one job (described by its name), and ExecuteSQL can still accept an arbitrary number of parameters.

      The thing I liked about your construction is almost the opposite of what you like about it. I'd be inclined to put the initial statement of the SQL query at the top of the Let function — that way, when I'm skimming a script, I can see the intention of the calculation before the script window truncates its display, without having to open the calculation dialog, and I can see it in the actual code instead of in a leading comment that would have to be kept in sync with the code. Intention first; details second. I sympathize with descriptive variable names, but I hope that everyone is already using descriptive field names. Can you show an example of a query where the field name is descriptive enough to express its purpose without any probing, but you'd want the variable name in the pre-substitute SQL query to express something different?

  7. I, too, was interested in Brian's FileMakerSQLSugar, but I share many of the same reservations as Jeremy Bante.

    I've erred on the side of verbosity in my SQL statements, and I use clippings to provide quick prototypes for any queries I might write. I'm a little surprised to see that Matt Petrowsky's syntax isn't that far from my own, since I've always thought my own style to be a bit quirky. However there are a few key differences and I offer my syntax here to provide another angle to the discussion.

    It should be noted this originated in FMP 9 using a plug-in for making SQL calls, so it doesn't quite match the native one in FMP 12. Also, I've been using double-underscores for my placeholders, but this is syntactically equivalent to the tildes in the standards here. It's a holdover of mine from programming in other languages.

     Here's the basic format of a query:

    Let ( [
      __expression =
       "SELECT __someField
        FROM __someTable JOIN __someOtherTable ON __someKeyField = __someOtherField
        WHERE __someKeyField = '__someTextValue' AND __someOtherField = '__someDateValue'";

      __expression = Substitute ( __expression;
       [ "__someTable"; getSQLFieldTable ( fieldName ; "" ) ];
       [ "__someKeyField"; getSQLFieldName ( fieldName; "" ) ];
       [ "__someField"; getSQLFieldName ( fieldName; "" ) ];
       [ "__someOtherTable"; getSQLFieldTable ( fieldName ; "" ) ];
       [ "__someOtherKeyField"; getSQLFieldName ( fieldName; "" ) ];
       [ "__someTextValue"; "CONSTANT" ];
       [ "__someDateValue"; getAsSQLDate ( $date ) ];
       [ "__someFieldValue"; getAsSQLText ( fieldName ) ];
       [ "__someValueList"; getAsSQLList ( listOfValues ) ];
       [ "__someString"; __someString ];
       [ "__someVariableValue"; $variableName ];
       [ "<> ''"; "IS NOT NULL" ];
       [ "= ''"; "IS NULL" ] );

     $result = executeSQL ( __expression ) //$result will hold the array of values returned
    ];

     Case (
       $result = "?"; "1200: SQL Parse error";
       IsEmpty ( $result ); "401: No records found";
       False
     )
    )

    First, like Jeremy Bante, I want to see the SQL query at the top so I know what's going on. My substitution placeholders are named as closely to the actual field and table names as is feasible, to aid query legibility.

    I have two sets of custom functions for deriving SQL-friendly table and column names. Above I use getSQLFieldName() and getSQLFieldTable() which return quoted, fully qualified names. There is an optional parameter for table aliasing in case I need to reference a table more than once in a query. I have another set, getTableName() and getFieldName() that return unquoted table and field names, which are used in INSERT and UPDATE statements (via plug-in). Here are examples of how the custom functions work:

    Function                                  Returns
    getTableName ( TABLE::field )             TABLE
    getFieldName ( TABLE::field )             field
    getSQLTableName ( TABLE::field ; "" )     "TABLE"
    getSQLFieldName ( TABLE::field ; "" )     "TABLE"."field"
    getSQLTableName ( TABLE::field ; "A" )    "TABLE" as "A"
    getSQLFieldName ( TABLE::field ; "A" )    "A"."field"
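
    A minimal sketch of how the aliasing variant above might behave (illustrative only; not the actual custom function):

    // getSQLFieldName ( fieldReference ; alias ), illustrative sketch only:
    // qualify the quoted field name with the alias when one is supplied,
    // otherwise with the table occurrence name, matching the table above.
    Let ( [
    	~reference = Substitute ( GetFieldName ( fieldReference ) ; "::" ; ¶ ) ;
    	~table = GetValue ( ~reference ; 1 ) ;
    	~field = GetValue ( ~reference ; 2 )
    ] ;
    	If ( IsEmpty ( alias ) ;
    		Quote ( ~table ) & "." & Quote ( ~field ) ;
    		Quote ( alias ) & "." & Quote ( ~field )
    	)
    )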

     

    The custom functions grew somewhat organically, and I'm not wedded to their use. For example, I could have written my query as

    SELECT "A".__someField
    FROM __someTable AS "A"

    which might be just as legible. Some of the other custom functions are just SQL niceties, such as getAsSQLDate() and getAsSQLList() which make for easier inclusion in a query. One thing I've done is append the words "Field" or "Table" to the placeholders to guard against substring substitutions. I could just as well have appended additional double-underscores (or tildes in case of your standards) such as:

     Substitute ( __expression;
        [ "__itemName__"; getSQLFieldName ( ITEM::ItemName; "" ) ];

    Either way works for me, but for historical reasons I've done it the first way. The rest of the expression (the part that is returned by the Let() function)

    Case (
       $result = "?"; "1200: SQL Parse error";
       IsEmpty ( $result ); "401: No records found";
       False
    )

    is because I typically use this in a Set Variable[] script step that sets a $lastError variable, which is then checked in my script control structure. It provides (somewhat minimal) error checking and sets the script variable either to False if no error was encountered, or to one of a couple of different errors that my script will check to determine how to proceed.
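
    In script form, that pattern looks roughly like this (sketched from the description above, not copied from an actual script):

    # Illustrative sketch of the error-trapping script pattern
    Set Variable [ $lastError ; Value: <the Let ( ... ) calculation shown above> ]
    If [ $lastError ]
    	# $lastError now holds "1200: SQL Parse error" or "401: No records found";
    	# branch or exit here according to the script's control structure
    End If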

     

    Reading through the proposals here, one thing I'm unclear about is the benefit of writing the SQL expression using the List() function or with pilcrows. That is, why is this

    List (
      "SELECT __someField";
      "FROM __someTable JOIN __someOtherTable ON __someKeyField = __someOtherField";
      "WHERE __someKeyField = '__someTextValue' AND __someOtherField = '__someDateValue'"
    )

    preferable to simply writing out the SQL query such as

    "SELECT __someField
     FROM __someTable JOIN __someOtherTable ON __someKeyField = __someOtherField
     WHERE __someKeyField = '__someTextValue' AND __someOtherField = '__someDateValue'";


    As far as I can tell, FileMaker ignores whitespace (for the most part), so we can write our queries out as in the second example without concern. As such, List() doesn't appear to add any benefit to me, other than somewhat mirroring the way Brian's @ function operates, but I'll concede I may be missing something entirely.

  8. Very nice Corn! Thanks for providing your FileMaker wisdom! Nice to know I'm not way off base in how I've approached it. (smile)

    Given what you've brought up about some of the aspects we didn't address, here is a list of what I see needs to be covered.

    • Pilcrows, List() or Nothing? (The obvious escaped me and I think your suggestion of nothing is best)
    • ~sql at top with field/table declarations following or the inverse. Or, order not specified. I'm personally flexible on this, but it would be nice to define the order. I can certainly go with Jeremy and put the SQL at top. I like his point about being able to read it within the script step without opening the dialog box. I hadn't considered this aspect.
    • Suggest a minimal set of SQL support functions, including debugging, or suggest just the names of the custom functions (as we've done in Reserved elements). In other words, do we always protect all fields, or err on the side of readability? For example, with SQLFieldName() I can learn that it quotes automatically, or by using Quote ( FieldName ( Table::fieldName ) ) I can read it. Granted, you have the extra keystrokes of typing Quote(), but it is pretty clear what's happening without having to dig into the custom function.
    • Address the topic of table aliases. (Something I didn't even consider - despite doing this a lot in my Drupal development.) I would personally opt for the higher degree of control by specifying aliases within the SQL instead of within a custom function. If the custom functions always assumed and used the table name, it would remove my ability to use shorthand versions. It does apply a degree of conformity - but those familiar with SQL are accustomed to shortening table names to aliases as small as one letter, as you did above. Here's a modified version of the suggested syntax:

      Let ( [ ~sql = "
      	SELECT ~field
      	FROM ~table1 t1
      	JOIN ~table2 t2
      	ON t1.~field = t2.~field
      	WHERE ~field=?
      	ORDER BY ~field
      	";
      	~table1 = Quote ( TableName ( Table1::fieldName ) );
      	~table2 = Quote ( TableName ( Table2::fieldName ) );
      	~field  = Quote ( FieldName ( Table1::fieldName ) );
         
          $sqlQuery = Substitute ( ~sql ;
      		[ "~table1" ; ~table1 ];
      		[ "~table2" ; ~table2 ];
      		[ "~field" ; ~field ]
      	)
      ];
      	SQLDebugResult (
      		ExecuteSQL ( SQLSanitize ( $sqlQuery ) ; "" ; "" ;
      			$value ;
      			$value[2] ;
      			$value[$n]
      		)
      	)
      )

      I've mocked up the above based on what's been discussed. I've not tested it by any means and I've not personally used aliases within my FileMaker SQL because I'm not doing anything that complex yet. Does FileMaker require aliases to be quoted? I should open FileMaker... but I'm too lazy right now. (smile)

      How does this track with you guys?

    1. In the code example above, I think setting the ~table and ~field variables is not necessary; why not just determine the value in the Substitute() function, like the code below?

      $sqlQuery = Substitute ( ~sql ;
      	[ "~table1" ; Quote ( TableName ( Table1::fieldName ) ) ];
      	[ "~table2" ; Quote ( TableName ( Table2::fieldName ) )];
      	[ "~field" ; Quote ( FieldName ( Table1::fieldName ) )]
      )

       

      I think I would still prefer to do error handling within the same Let statement, like:

      $sqlResult = ExecuteSQL ( $sqlQuery ; "" ; "" ;
      	$value
      ) ;
      $error = If (
      	$sqlResult = "?" ;
      	ErrorDataApp ( 17 ) & # ( "sqlQuery" ; $sqlQuery ) ;
      	$error
      )

      My reason for this is because I can easily modify the test for an error if I use a plugin to perform the SQL, either for working with older databases or for performing INSERT and UPDATE.

      I also like setting the result to an intermediary variable $sqlResult, which allows me to perform post-processing on the result within the same calculation, but to still see the raw result when stepping through the script, which helps when debugging.

    2. I used to do my substitutions separately as well, but ended up where Daniel Smith suggests. I found some of my queries and substitution lists were growing quite long, and doing it the way he suggests significantly cut down on the size of the calculation formula, enhancing readability in the process.

      While aliases don't require quotes (unless they are reserved words, but why would you do that?), here's how I arrived at the position that the function should always quote. FMP 10 became more strict about parsing expressions with reserved words - even those it wasn't implementing - and some of the SQL expressions I had written in FMP 9 broke with the new release. In general it is best not to use reserved words, but quoting by default will prevent a regression if a table occurrence or field is renamed to a reserved word. I also had a scenario where a TO being used in an SQL expression was renamed to include a space. Had they been quoted, the SQL expression would have been protected against field and TO renaming (which is the whole point of using the custom function in the first place).

      Additionally, I personally find that using the Quote() function detracts from the readability of the calculation. While I can come up with plenty of reasons why you would want to quote table and field names, I can't come up with a good argument not to do so. Ultimately, however, this may be a question of style rather than standards.
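
      As a quick hypothetical illustration of that point (the "Line Items" table occurrence and "id" field are made up): because the functions always quote, the generated SQL keeps parsing even after a TO is renamed to include a space or a reserved word:

      // Hypothetical example only: "Line Items" and "id" are invented names.
      "SELECT " & getSQLFieldName ( Line Items::id ; "" )
      	& " FROM " & getSQLTableName ( Line Items::id ; "" )
      // result:  SELECT "Line Items"."id" FROM "Line Items"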

      I have a set of custom functions all named similarly (e.g. getAsSQLText(), getAsSQLDate(), ...) that help format values for inclusion in SQL expressions, but in FMP 12 I don't need them so much anymore (except getAsSQLList(), and when using a plug-in for INSERT and UPDATE operations). What does the SQLSanitize() function do?

  9. The SQLSanitize function is one third of my deconstruction of the functionality Matt suggested for a SQLResult function. It would serve a purpose similar to the @ SQL sugar's @.sqlQuery function: post-calculation/pre-evaluation clean-up of a SQL query. I'd prefer that we arrive at practices that don't use such a function, but if we do, I'd prefer that it be a separate function from execution and debugging.
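
    In its most minimal form, that clean-up role might amount to something like this (illustrative sketch only, mirroring the commented-out preview line in the snippet at the top of the page):

    // SQLSanitize ( sqlQuery ), illustrative minimal sketch: strip the tab
    // indentation used to format the calculation so the query previews cleanly.
    Substitute ( sqlQuery ; Char ( 9 ) ; "" )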

  10. I would hope such a function isn't necessary, but perhaps there are others who might find utility in it. I try to ensure my SQL queries are structured correctly to begin with; I'm not sure I'd trust a function to catch all the possible ways I might screw up.

  11. I'm not personally in love with any visual clean-up of the result and don't see it as necessary. That could be an optional custom function that we create for fun. (smile) However, I threw it in there not knowing what the full intention for it was. I was actually thinking it might be the function that did the collision detection and quoting. As I had written the function, I was still using the more generic FieldName and TableName instead of SQL-specific wrappers.

    This meant, in my view of how things were looking, that it was the "protecting" function.

    I'm totally in line with what Daniel and Corn mentioned about the superfluous variable declarations and just doing it within the Substitute. Anything that reduces the amount of code and still communicates clearly is ideal. It's the elegant code that we're shooting for. Based on what Daniel mentioned about error trapping inline, and the fact that Corn too is using plugin-based SQL, I think we should account for that and not just FileMaker's ExecuteSQL(). How about this revision...

    Let ( [ ~sql = "
    	SELECT ~field
    	FROM ~table1 t1
    	JOIN ~table2 t2
    	ON t1.~field = t2.~field
    	WHERE ~field=?
    	ORDER BY ~field
    	";
        $sqlQuery = Substitute ( ~sql ;
    		[ "~table1" ; SQLTableName ( Table1::fieldName ) ];
    		[ "~table2" ; SQLTableName ( Table2::fieldName ) ];
    		[ "~field" ; SQLFieldName ( Table1::fieldName ) ]
        );
    	$sqlResult = SQLDebugResult (
    		ExecuteSQL ( $sqlQuery ; "" ; "" ;
    			$value ;
    			$value[2] ;
    			$value[$n]
    		)
        )
    ];
    	If ( $sqlResult = "?" ;
    		False ;
    		True
    	)
    )

    The above would presumably be set into a Set Variable like $sqlSuccess?

    Given the above, we have initial SQL visibility within the script step, as Jeremy wants. We have reduced verbosity, per Daniel and Corn. We have collision avoidance and quoting within SQLTableName() and SQLFieldName(). Error trapping is inline by using $sqlResult and could easily be expanded to account for plugin use. Would SQLDebugResult() be specific to FileMaker's ExecuteSQL()? (Or do we manage to make it a wrapper for plugin usage too?) Table aliases in the above are not explicitly quoted - but this makes it more readable and cleaner to me. I'm not too fond of seeing a lot of backslash quotes if they aren't absolutely necessary.

    This is also where I was thinking SQLSanitize() could hunt for any "alias." prefix by looking for the period. Of course, since we don't have regex, pattern matching would be a bit more of a hassle. There's also the potential for false positives on periods, though. I would propose that we suggest an alias naming convention which would have little, if any, potential for collision with reserved SQL keywords. Why not promote a standard of using a minimal number of letters that follow the table names? People AS p, Customers AS c, Teachers AS t, etc.

    We would now have two reserved variables, $sqlQuery and $sqlResult - which doesn't bother me. The SQL prefix is used for SQL-specific functions. While Corn and others are very familiar with getters and setters, I think we should avoid the similarity to Get() functions because of how they're called within FileMaker - although we probably use "Get" in some of our other custom functions on the site.

    Additional feedback?....