Error 'All object keys must match' #173

Closed · peachp opened this issue Jan 22, 2021 · 16 comments · Fixed by #417
Labels
postgrest-patch-needed Requires a change on PostgREST

Comments

@peachp commented Jan 22, 2021

Bug report

Describe the bug

When trying to insert an array of objects where the key field2 is present in some objects but not in others, the JS API client errors with 'All object keys must match'. The field2 column is not mandatory.

const { data, error } = await supabase
  .from('mytable')
  .insert([
    {
      field1: 'something'
    },
    {
      field1: 'something',
      field2: 'something'
    }
  ])

Desired behavior

The JS client / Supabase should default the mismatching fields to null, or whatever is translated to NULL in Postgres. (As a workaround I set the missing fields to null myself and it looks OK in my case.)
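For reference, a minimal sketch of that workaround (normalizeRows is just an illustrative helper, not part of supabase-js):

// Pad every row so all rows share the same key set, filling the gaps
// with null before handing the array to .insert().
function normalizeRows(rows) {
  const allKeys = [...new Set(rows.flatMap((row) => Object.keys(row)))]
  return rows.map((row) =>
    Object.fromEntries(allKeys.map((key) => [key, key in row ? row[key] : null]))
  )
}

const { data, error } = await supabase
  .from('mytable')
  .insert(normalizeRows([
    { field1: 'something' },
    { field1: 'something', field2: 'something' },
  ]))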

@ftonato commented Jan 22, 2021

Hello @peachp,

We can treat this as a bug, but it must be said that the equivalent expression is not valid in SQL either.

INSERT INTO TABLE_NAME (column_a,
                        column_b)
VALUES ('aaa'),
       ('aaa', 'bbb');

Error: Run query failed (INSERT has more target columns than expressions)

@peachp (Author) commented Jan 22, 2021

@ftonato yep, I know that the equivalent SQL wouldn't work, but that's kind of the point of Supabase for me: an API wrapper around Postgres that simplifies hosting and design, as well as querying the DB.

I don't see any disadvantages to defaulting missing / mismatching keys to null, except a tiny performance penalty from checking all objects' keys before sending to the server. I'm sure that something like Sequelize allows it.

If you still think there is some negative side effect or tradeoff, maybe you could at least make the error message more descriptive.

@ftonato commented Jan 22, 2021

I agree with you! It would be good if it worked; maybe it would make things easier in some scenarios (like the one you are currently facing).

On the other hand, I wonder about the cost of the operation: checking whether the missing fields accept empty values, whether default values are defined for them, and falling back to null when they are not.

Anyway, I think you are right to open the discussion as a whole, but my comment above was more about whether to treat this as a "bug", since this behavior is not described in the documentation.

@kiwicopple (Member)

I don't see any disadvantages to default missing / mismatching keys to null

Unfortunately setting it to null will try to explicitly insert a NULL value into the database (vs no value, which would let the database use a DEFAULT value).

Perhaps undefined, but I still don't think that would work. @peachp can you try adding field2: undefined to your code?

If that doesn't work we'd have to map over all the values, then run the inserts in batches of matching keys - but this is a bit of a fringe case, so I don't know if it's worth the added complexity (perf loss) for our library.
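Roughly, that mapping would look something like this sketch (illustrative only, not a proposed API; insertInBatches is a made-up name):

// Group rows by their key signature and issue one insert per group,
// so every batch sent to the server has matching keys.
async function insertInBatches(table, rows) {
  const groups = new Map()
  for (const row of rows) {
    const signature = Object.keys(row).sort().join(',')
    if (!groups.has(signature)) groups.set(signature, [])
    groups.get(signature).push(row)
  }
  for (const batch of groups.values()) {
    const { error } = await supabase.from(table).insert(batch)
    if (error) return { error }
  }
  return { error: null }
}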

@peachp (Author) commented Jan 23, 2021

@ftonato I was not sure if it is a bug, so feel free to change it to enhancement / feature request.

@kiwicopple

Unfortunately setting it to null will try to explicitly insert a NULL value into the database (vs no value, which would let the database use a DEFAULT value).

Is it bad to explicitly insert a NULL value?
And for DEFAULT values, does that mean I always need to define a default value for each field when creating a table?

I don't know the details of NULL vs DEFAULT vs "empty" in Postgres. For example, the ERP system I worked with never had NULL values in the DB - it made sure to write default values like an empty string or 0.

Perhaps undefined, but I still don't think that would work. @peachp can you try adding field2: undefined to your code?

It was a one-off migration task I already finished. However, my understanding is that field2 is already undefined. So if you have an object / record like this:

var obj = {
  field1: 'something'
}

then obj.field2 already returns undefined and there is no point setting it to undefined. But let me know if you still want me to try it; maybe there is a difference between an initial undefined and an explicitly set undefined in JS.
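For what it's worth, if the client serializes the payload with JSON.stringify (I assume it does), an explicitly set undefined is dropped anyway, so both objects produce the same request body:

// Properties set to undefined are omitted by JSON serialization, so an
// explicit `field2: undefined` ends up identical to leaving the key out.
const implicit = { field1: 'something' }
const explicit = { field1: 'something', field2: undefined }

console.log(JSON.stringify(implicit)) // {"field1":"something"}
console.log(JSON.stringify(explicit)) // {"field1":"something"}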

So currently I understand I have only these options then:

  • Set mismatching keys to null in JS code
  • Set mismatching keys to some default values in JS code (like '')
  • Define DEFAULT values for all my fields (not sure I can even define "empty string" when creating tables via the Supabase Dashboard)

Or am I missing something?
It just sounds very tedious to always set null / default values for each of the potentially missing keys every time I create or update records. It would be great to solve it once for everyone somehow - in the JS client or server-side, I don't mind.

But again, I'm happy to be educated on why it is good to always make the developer decide each time whether it should be NULL or some default value.

kiwicopple transferred this issue from supabase/supabase on Apr 14, 2021
@Polyterative commented Apr 22, 2021

I would like to know about this myself, since I'm trying to upsert.

In my situation I have references to the ids of some rows, but in the same operation I also need to add new rows for which I do not have an identifier yet.

[screenshot of the upsert call omitted]

This returns {"message":"All object keys must match"}

Summing up: the upsert does not do what I expect, which is to add new rows and update those already in the DB in the same operation.

This forces me to make two separate API calls, one for insert and one for update.
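
Roughly, the kind of call I mean (table and column names here are just illustrative):

// Some rows carry an id (existing records), others don't (new records);
// the mixed key sets trigger "All object keys must match".
const { data, error } = await supabase
  .from('mytable')
  .upsert([
    { id: 42, name: 'existing row to update' },
    { name: 'new row without an id yet' },
  ])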

@steve-chavez (Member)

I also need to add new rows for which I do not have an identifier yet.

A workaround for making null identifiers work is to create a trigger that converts the null to the default id. Like:

-- Change items and its sequence according to your table
CREATE OR REPLACE FUNCTION items_null_id_is_default() RETURNS TRIGGER AS $$
BEGIN
  NEW.id = coalesce(NEW.id, nextval('items_id_seq'));
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER items_null_id_is_default 
BEFORE INSERT ON items FOR EACH ROW EXECUTE PROCEDURE items_null_id_is_default();

Then you can avoid the 'All object keys must match' error by adding a null id to the rows for which you don't have an identifier:

[
    {
      id: null,
      name: 'a name'
    },
    {
      id: 123,
      name: 'another name'        
    }
]

(upsert should work then).

Once PostgREST/postgrest#1567 is cleared out then the library will allow this out-of-the-box.
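
For example, with the trigger in place a single upsert should cover both cases (sketch, assuming the table is named items):

// Rows without a real id carry an explicit null, which the trigger
// replaces with the next sequence value on insert.
const { data, error } = await supabase
  .from('items')
  .upsert([
    { id: null, name: 'a name' },
    { id: 123, name: 'another name' },
  ])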

@JVariance

A workaround for making null identifiers work is to create a trigger that converts the null to the default id. [...]

Do I have to create the sequence ('items_id_seq') manually, or is it automatically created by Supabase?

@JVariance

I found the answer here: https://stackoverflow.com/a/10332015.
The sequence 'items_id_seq' is created automatically on table 'items' for 'items.id'.

steve-chavez added the postgrest-patch-needed label on Nov 23, 2021
@shopsideau

Summing up: the upsert does not do what I expect, which is to add new rows and update those already in the DB in the same operation. [...]

+1 for allowing this pattern

@UngererFabien

It feels like it should at least allow the upsert with mismatching keys when a default value is provided for the mismatched keys in the table.

So that an upsert like this is possible:

[
  {
    field1: 'something',
    field2: 'something'
  },
  {
    field1: 'something',
    field2: undefined // default value provided or null allowed on the table
  }
]

@travisanderson

+1 for this. I mean, I thought that is what upsert is for.

I have multiple rows... if I add a new one, it doesn't have an ID yet... so I would expect existing items to get updated while the new one gets inserted.

@noeleom commented Apr 25, 2022

+1 for this as well. The trigger feels like a workaround, not the expected behavior.

@ScarVite commented Dec 2, 2022

Any update on this?

It's been quiet for a while now.

@stevemk42

..bumped into this error, you guys saved my day.
I can add the so-called missing fields to the objects I send for upsert, but it doesn't seem intuitive.
I hope this gets fixed soon.

@steve-chavez (Member)

FYI, this is being worked on at PostgREST/postgrest#2672.
